Docker CMD vs. ENTRYPOINT: What's the Difference and How to Choose

CMD and ENTRYPOINT are two Dockerfile instructions that together define the command that runs when your container starts. Use these instructions in your Dockerfiles so that users can easily interact with your images. Because CMD and ENTRYPOINT work in tandem, they are often confused with one another. This article clears up that confusion.

In a cloud-native setup, Docker containers are essential elements that ensure an application runs effectively across different computing environments. These containers carry out specific tasks and processes of an application workflow and are created from Docker images.

The images, in turn, are built and run by executing Docker instructions through a Dockerfile. Three types of instructions (commands) are used to build and run Dockerfiles:

  • RUN. Mainly used to build images and install applications and packages, RUN builds a new layer over an existing image by committing the results.
  • CMD. Sets default parameters that can be overridden from the Docker Command Line Interface (CLI) when a container is run.
  • ENTRYPOINT. Sets the executable (and its default parameters) that always runs when the container starts; CLI parameters are appended to it rather than overriding it.

Any Docker image must have an ENTRYPOINT or CMD declaration for a container to start. Though the ENTRYPOINT and CMD instructions may seem similar at first glance, there are fundamental differences in how they define the command a container runs.

(This is part of our Docker Guide. Use the right-hand menu to navigate.)

Shell form vs. executable form

First, we need to understand how a Docker Daemon processes instructions once they are passed.

The RUN, CMD, and ENTRYPOINT instructions can each be specified in either shell or exec form. Let's build a sample Dockerfile to understand these two forms.

(Explore more Docker commands.)

Shell command form

As the name suggests, the shell form of an instruction starts a process that runs within a shell. Docker executes it by invoking /bin/sh -c <command>.

Because the command runs inside a shell, shell processing takes place: environment variables are substituted before the results are returned.

Shell form commands are specified as:
<instruction> <command>
Examples of shell form commands include:

RUN         yum -y update
RUN         yum -y install httpd
COPY        ./index.html /var/www/index.html
CMD         echo "Hello World"

A Dockerfile named Darwin that uses the shell form will have the following specifications:

ENV name Darwin
ENTRYPOINT /bin/echo "Welcome, $name"

(The command specifications used above are for reference. You can include any other shell command based on your own requirements.)

Based on the specification above, the output of the docker run -it darwin command will be:

Welcome, Darwin

Because this form wraps the command in an extra shell process and performs shell processing before returning results, it adds overhead. As a result, the shell form is usually not preferred unless there are specific command or environment-variable processing requirements.

Executable command form

Unlike the shell form, an instruction written in exec (executable) form runs the executable directly, without invoking a shell or any shell processing.

Executable command syntaxes are specified in the form:

<instruction> ["executable", "parameter 1", "parameter 2", …]

Examples of executable commands include:

RUN ["yum", "-y", "update"]
CMD ["yum", "-y" "install" "httpd"]
COPY ["./index.html/var/www/index.html"]

To build a Dockerfile named Darwin in exec form:

ENV name Darwin
ENTRYPOINT ["/bin/echo", "Welcome, $name"]

Because this avoids shell processing, the output of the docker run -it darwin command will be returned as: Welcome, $name.

This is because no shell is present to substitute the environment variable. To run bash in exec form, specify /bin/bash as the executable, i.e.:

ENV name Darwin
ENTRYPOINT ["/bin/bash", “-c” "echo Welcome, $name"]

This invokes shell processing, so the output of the container will be: Welcome, Darwin.

Commands in a containerized setup are instructions passed to the operating environment to produce a desired output. It is important to use the right command form for passing instructions in order to:

  • Return the desired result
  • Ensure that you don’t push the environment into unnecessary processing, thereby impacting operational efficiency


CMD vs. ENTRYPOINT: Fundamental differences

CMD and ENTRYPOINT instructions have fundamental differences in how they function, making each one suitable for different applications, environments, and scenarios.

They both specify programs that execute when the container starts running, but:

  • CMD commands are ignored by the daemon when parameters are stated within the docker run command.
  • ENTRYPOINT instructions are not ignored; instead, the command-line parameters are appended as arguments to the ENTRYPOINT command.

Next, let’s take a closer look. We’ll use both command forms to go through the different stages of running a Docker container.

Docker CMD

Docker CMD commands are passed through a Dockerfile that consists of:

  • Instructions on building a Docker image
  • Default commands (binaries) for running a container from the image

With a CMD instruction type, a default command/program executes even if no command is specified in the CLI.

Ideally, there should be a single CMD instruction within a Dockerfile. If there are multiple CMD instructions, all except the last one are ignored.

An essential feature of a CMD command is its ability to be overridden. This allows users to execute commands through the CLI to override CMD instructions within a Dockerfile.

A Docker CMD instruction can be written in both Shell and Exec forms as:

  • Exec form: CMD ["executable", "parameter1", "parameter2"]
  • Shell form: CMD command parameter1 parameter2
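To make the two forms concrete, here is the same default command written both ways (a hypothetical nginx-based image is assumed; the flag simply keeps nginx in the foreground):

# Exec form: runs the binary directly, with no shell in between
CMD ["nginx", "-g", "daemon off;"]

# Shell form: Docker wraps the command as /bin/sh -c "nginx -g 'daemon off;'"
CMD nginx -g "daemon off;"

Either line can be overridden at run time by passing a different command to docker run.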

Stage 1. Creating a Dockerfile

When building a Dockerfile, the CMD instruction specifies the default program that will execute once the container runs. A quick point to note: CMD commands will only be utilized when command-line arguments are missing.

We’ll look at a Dockerfile named Darwin with CMD instructions and analyze its behavior.

The Dockerfile specifications for Darwin are:

FROM centos:7
RUN     yum -y update
RUN     yum -y install python
COPY ./opt/source code
CMD ["echo", "Hello, Darwin"]

The CMD instruction in the file above echoes the message Hello, Darwin when the container is started without a CLI argument.

Stage 2. Building an image

Docker images are built from Dockerfiles using the command:

$ docker build -t darwin .

The above command does two things:

  • Tells the Docker daemon to build an image from the Dockerfile in the current directory
  • Tags the resulting image as darwin

Stage 3. Running a Docker container

To run a Docker container, use the docker run command:

$ docker run darwin

Since no command-line argument is provided, the container runs the default CMD instruction and displays Hello, Darwin as output.

If we add an argument with the run command, it overrides the default instruction, i.e.:

$ docker run darwin hostname

Because the default CMD gets overridden, the above command runs the container and displays its hostname, ignoring the echo instruction in the Dockerfile, with the following output:

6e14beead430

which is the hostname of the Darwin container.

When to use CMD

The best way to use a CMD instruction is by specifying default programs that should run when users do not input arguments in the command line.

This instruction ensures the container is in a running state by starting an application as soon as the container image is run, while still leaving the default command easy to override.

Additionally, in specific use cases, a docker run command can be executed through a CLI to override instructions specified within the Dockerfile.
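For instance, a simple web-server image might use CMD to keep the container running by default while still allowing an easy override for debugging. This is only a hedged sketch; httpd -D FOREGROUND is the flag the Apache images commonly use to stay in the foreground:

FROM centos:7
RUN yum -y install httpd
# Default: run Apache in the foreground so the container stays up
CMD ["httpd", "-D", "FOREGROUND"]

Running docker run <image> /bin/bash would replace that default and drop you into a shell instead.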



Docker ENTRYPOINT

In Dockerfiles, an ENTRYPOINT instruction is used to set executables that will always run when the container is initiated.

Unlike CMD commands, ENTRYPOINT commands are not overridden by command-line arguments passed to docker run; those arguments are appended instead. (The only way to replace an ENTRYPOINT at run time is the --entrypoint flag.)

A Docker ENTRYPOINT instruction can be written in both shell and exec forms:

  • Exec form: ENTRYPOINT ["executable", "parameter1", "parameter2"]
  • Shell form: ENTRYPOINT command parameter1 parameter2

Stage 1. Creating a Dockerfile

ENTRYPOINT instructions are used in Dockerfiles that are meant to always run a specific command.

These are reference Dockerfile specifications with an Entrypoint command:

FROM centos:7
RUN     yum -y update
RUN     yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello, Darwin"]

The above Dockerfile uses an ENTRYPOINT instruction that echoes Hello, Darwin when the container is running.

Stage 2. Building an Image

The next step is to build a Docker image. Use the command:

$ docker build -t darwin .

When building this image, the daemon looks for the ENTRYPOINT instruction and sets it as the default program that will run with or without command-line input.

Stage 3. Running a Docker container

When running a Docker container using the Darwin image without command-line arguments, the default ENTRYPOINT instructions are executed, echoing Hello, Darwin.

In case additional command-line arguments are introduced through the CLI, the ENTRYPOINT is not ignored. Instead, the command-line parameters are appended as arguments for the ENTRYPOINT command, i.e.:

$ docker run darwin hostname

will execute the ENTRYPOINT with hostname appended as an additional argument. Because echo simply prints its arguments rather than executing them, the output will be:

Hello, Darwin hostname
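If you genuinely need to run a different program in this image, the --entrypoint flag replaces the ENTRYPOINT at run time (a minimal sketch using the darwin image built above):

$ docker run --entrypoint hostname darwin

This bypasses the echo entrypoint entirely and prints the container's actual hostname.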

When to use ENTRYPOINT

ENTRYPOINT instructions are suitable for both single-purpose and multi-mode images where there is a need for a specific command to always run when the container starts.

One of its popular use cases is building wrapper container images that encapsulate legacy programs for containerization, which leverages an ENTRYPOINT instruction to ensure the program will always run.

Using CMD and ENTRYPOINT instructions together

While there are fundamental differences in their operations, CMD and ENTRYPOINT instructions are not mutually exclusive. Several scenarios may call for the use of their combined instructions in a Dockerfile.

A very popular use case for blending them is to automate container startup tasks. In such a case, the ENTRYPOINT instruction can be used to define the executable while using CMD to define parameters.

Let’s walk through this with the Darwin Dockerfile, with its specifications as:

FROM centos:7
RUN     yum -y update
RUN     yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello"]
CMD ["Darwin"]

The image is then built with the command:

$ docker build -t darwin .

If we run the container without CLI parameters, it will echo the message Hello Darwin (the ENTRYPOINT followed by the default CMD argument).

Appending a parameter to the command, such as a username, will override the CMD instruction and execute the ENTRYPOINT instruction using the CLI parameters as arguments. For example, the command:

$ docker run darwin User_JDArwin

will return the output:

Hello User_JDArwin

This is because the ENTRYPOINT instructions cannot be ignored, while with CMD, the command-line arguments override the instruction.
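To recap, the combined image behaves as follows (the Alice argument is a hypothetical example):

$ docker run darwin          # ENTRYPOINT + default CMD, prints: Hello Darwin
$ docker run darwin Alice    # ENTRYPOINT + CLI argument, prints: Hello Alice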



Using ENTRYPOINT or CMD

Both ENTRYPOINT and CMD are essential for building and running Dockerfiles—it simply depends on your use case. As a general rule of thumb:

  • Use ENTRYPOINT instructions when building an executable Docker image using commands that always need to be executed.
  • Use CMD instructions to supply default arguments that apply until the user explicitly overrides them on the command line when running the container.

A container image requires different elements, including runtime instructions, system tools, and libraries, to run an application. To get the best out of a Docker setup, administrators should understand how these instructions function and when to apply each, as they are critical to building images and running containers efficiently.

Enterprise Service Management vs ITSM: What’s The Difference?

Service management is a set of processes used in designing, operating, and controlling the delivery of IT services. With a combination of tools, processes, and people, service management provides a framework for organizations to deliver IT services while enabling collaboration between internal cross-functional teams and clients.

As an important aspect of digital transformation, the key to successful service management is a continuous, streamlined approach to maintaining and delivering IT services. While there are several frameworks that organizations can leverage to improve their service management practices, it's crucial to apply best practices in light of the organization's own use case. With that in mind, the following points typically form the basis of an effective service management framework. An adopted framework should:

  • Account for the current state of your organization’s environment.
  • Consider emerging technologies and practices to ensure that your employees are getting the most out of your IT services.
  • Deliver on customer expectations.

In this article, we delve into two service management approaches and their major differences: IT Service Management (ITSM) and Enterprise Service Management (ESM).

What is IT Service Management (ITSM)?

IT Service Management (ITSM) is a strategic approach to delivering IT as a service. It comprises a set of workflows and tools for optimally creating, implementing, delivering, and managing IT services, with a focus on customer needs. The goal of ITSM is to give IT teams the processes and tools to manage end-to-end IT services while improving business performance, increasing productivity, and enhancing customer satisfaction.

Additionally, ITSM focuses on facilitating the core IT functions of an organization, helping the business achieve its goals while managing costs by making the most of the IT budget.

ITSM benefits businesses in the following ways:

  • Supports agility and adaptability by helping the business innovate faster in response to market changes
  • Improves productivity and speeds up incident resolution
  • Anticipates and resolves issues preemptively, before they occur
  • Improves performance across the board as a result of increased IT availability

ITSM focuses on optimizing IT for the organization, and there are different frameworks of best practices, processes, and tools that come into play to achieve this.

ITSM Processes

Although technology plays an integral role in delivering IT services, certain procedures must also be followed to ensure efficient IT service delivery; these are encompassed in ITSM processes. Some of the core ITSM processes include:

  • Change Management helps handle all IT infrastructure changes to provide transparency and prevent bottlenecks.
  • Incident Management is concerned with responding to and resolving service issues or incidents promptly and appropriately to reduce downtime or service interruption.
  • IT Asset Management involves accounting for, deploying, maintaining, upgrading, and disposing of an organization’s assets in a timely manner.
  • Knowledge Management creates, uses, manages, and shares the information assets of an organization to achieve business objectives.
  • Problem Management assists with locating and figuring out approaches to eliminate the underlying causes of incidents to avoid repeat occurrences.
  • Service Request Management is concerned with managing all IT service requests like access requests, software and hardware upgrades.

ITSM Frameworks

ITSM frameworks are best practices and formalized guidelines that provide a systematic approach to implementing ITSM for organizations. These help organizations set their ITSM strategy while monitoring how they implement the chosen strategy. However, a crucial thing to note is that these frameworks are not ‘rules’ to follow strictly and are open to interpretation.

There are several ITSM frameworks, listed below, and organizations can combine them to meet varied IT service delivery needs:

  • ITIL®. ITIL is the most popular, globally recognized, and widely adopted framework for ITSM. It emphasizes a comprehensive approach to ITSM based on principles such as focusing on value, keeping it simple and practical, while allowing progressive iterations with feedback.
  • DevOps focuses on achieving faster service delivery by applying agile practices to foster collaboration between cross-functional teams.
  • ISO 20000. A global IT service management standard that helps organizations establish IT service management processes to align with their business needs and international practice.
  • Control Objectives for Information and Related Technologies (COBIT). COBIT is a framework developed by ISACA as a support tool for managers that helps them align IT goals with business goals by implementing better IT governance.
  • The Business Process Framework (eTOM). The eTOM framework is a collection of best practices, models, and standards for IT service delivery developed by the TeleManagement Forum.
  • Other frameworks include FitSM, SAFe, and the IT4IT Reference Architecture.

ITSM Tools

A variety of tools enable the delivery of IT services by supporting the associated tasks and workflows, such as incident management, change management, and service requests. However, before choosing a particular ITSM solution, there are a few points to keep in mind when analyzing the options.

  • The setup and activation process: A simplified setup and activation process with intuitive steps and knowledgeable support staff enhances the adoption process.
  • User-friendliness: The ITSM application should be intuitive, easy to use, and navigate to promote full adoption.
  • Collaboration: The tool should facilitate organization-wide collaboration.
  • Flexibility: The tool should be adaptable and flexible as your business needs evolve.

(View the Gartner® Magic Quadrant™ for ITSM Tools.)

What is Enterprise Service Management (ESM)?

First coined in 2002 by IBM as part of their strategy to address business-centric use-cases, ESM is a set of tools, processes and methodologies that aims to offer visibility into all aspects of service management and delivery across the entire enterprise.

ESM utilizes ITSM capabilities and processes in business areas of the organization, going beyond IT, to enhance operational efficiency and service delivery.  It essentially takes service management principles, structures, and technologies from the ITSM concept and applies them to various business verticals of the organization such as Human Resources (HR), Legal, Finance, Marketing, etc.

(Read our full explainer on ESM.)


ESM vs ITSM similarities

Because ESM acts as an extension of ITSM, both frameworks share fundamental similarities in how they operate. ESM inherits all the benefits of ITSM and, in particular, offers the following:

  • Cost-saving through efficient workflows
  • Increased customer satisfaction through continuous feedback and support
  • Building long term relationships with customers

ESM vs ITSM differences

At the same time, there are a few differences, which come down to the scope in which service management is applied.

  • The key difference lies in the fact that ITSM primarily focuses on organizational processes related to IT services. These include core IT services such as system upgrades, access control, and application deployment. ESM, on the other hand, applies to a wide range of organizational processes beyond IT, such as human resources, employee onboarding, customer service, and resource procurement.
  • ITSM focuses on the technical aspects of IT operations while ESM focuses on the business-oriented use cases.
  • ESM can address non-technical needs that ITSM might not cover. For example, human resources may need to maintain a specified level of data privacy, or the business may need to comply with safety and data regulations such as HIPAA and GDPR.

Summary

Service management is an integral part of an organization's operations, whether at the IT level or company-wide, that ensures optimum performance and efficient, effective delivery, and ultimately offers enhanced business value.

To enable better planning and management, IT operations are typically broken down into multiple services. Service management ensures the smooth delivery of these qualitative, standardized services that support organizational activities. Efficient and effective service management frameworks not only help organizations adapt better to changing customer requirements but also help them develop long-term relationships with customers.

The ITSM approach addresses service management at the IT level, while ESM extends the principles and processes of service management to various departments and business verticals of an organization. The goals of ESM and ITSM overlap to an extent, and both adopt similar processes and tools to manage an organization's services.

The State of the Cloud in 2022

The use of cloud computing platforms has seen a significant rise in modern application development, with cloud infrastructure configuration accounting for up to 33% of global IT spending between 2017 and 2018. So, let's ask the question: five years later, what is the state of the cloud?

Let’s take a look!

State of cloud computing today

As with all new technologies, the cloud wasn’t adopted and implemented overnight. It took time for people to grasp what the cloud was—and even more time for tech companies to build out the infrastructure and technologies that would support the cloud. Taking it from a concept to a fully realized business offering is no small task.

(See how emerging technology cycles through hype.)

In the modern IT landscape, cloud computing has been gaining steady adoption due to its numerous benefits—like agility, scalability, and high performance.

By allowing resource provisioning in response to changes in workload requirements, cloud computing lets organizations configure flexible architecture to handle dynamic applications. Cloud platforms also enable:

  • Rapid product testing and delivery
  • Elastic global scaling
  • Better overall application performance
  • Developer productivity

Today, organizations around the world have harnessed cloud computing technology to provide a massive and diverse array of service offerings. Enterprises are increasingly adopting cloud computing services:

Enterprises continue to accelerate public cloud adoption, with 36% of organizations spending over $12 million per year on public clouds.

It’s clear that the cloud has become a technology that organizations can’t afford to ignore. Making the most of the cloud is no simple task, but many organizations are finding ways to optimize their cloud spend without decimating their budgets. The massive array of offerings leveraging the power of the cloud continues to grow as more businesses experiment with the new technology, finding ways to maximize its potential.

Cloud computing trends 2022

Here are today’s emerging trends—see how they highlight the prominence of cloud computing in today’s rapidly changing tech landscape.

  • Multi- and/or hybrid cloud environments. An increasingly popular choice, adopted by organizations to support workloads that require a vendor-agnostic, multi-platform approach.
  • Edge computing. A decentralization trend in which compute and storage elements are deployed closer to the devices for reduced latency and bandwidth optimization.
  • Serverless computing. This computing model involves several levels of backend-as-a-service offerings that let developers code freely while cloud providers manage platform implementation.
  • Data democratization. A practice that allows users to access, explore, and analyze data to aid decision-making in data-driven applications.

(Explore cloud growth, stats & more.)

Cloud computing concepts

Use cases of the cloud are extensive, spanning individual users, private organizations, and government agencies alike. The basic premise of cloud computing is that the intricacies of infrastructure implementation and the service location are hidden from the end user.

  • For individuals, the cloud allows users to access information or SaaS-based applications from any device connected to the internet, without the fear of local data loss or the need to install and regularly update software.
  • For organizations, cloud computing offers an agile and highly scalable platform to host applications and offer services.

While use cases differ, the following sections explore how a typical cloud computing framework functions and the different cloud-based models available to organizations for varying purposes.

How cloud computing works

Most trends in modern application development, such as containers and CI/CD pipelines, are significant factors pushing organizations to adopt the cloud. While there's no denying that the cloud has become a major part of IT infrastructure for the majority of companies, it can be difficult to understand what exactly this means from an operational perspective.

With cloud computing, an application’s backend—the primary group of computing components responsible for storing and processing data—is hosted on remote servers. These servers are exposed to the internet through pre-defined rule sets known as protocols.

The users then access the application through the front end: a browser or application endpoint with permissions to communicate with the server. The infrastructure required to run the server and related components is managed by the cloud provider, either in-house or through an external third-party vendor, so that the application's users only need an internet connection to reach its services.

(Read about cloud infrastructure.)

Types of cloud computing offerings

A typical application delivery contains numerous components requiring provisioning, configuration, and management effort. To help with this, cloud-based services are offered through different business models that are focused on a specific component as the primary offering.

Let’s look at the most common three—IaaS, PaaS, and SaaS.

(Looking for more? Read our in-depth explainer on all three.)

Infrastructure-as-a-Service (IaaS)

In an IaaS offering, the cloud service provider manages the basic IT infrastructure such as servers, storage, and networking. This allows the ‘user organization’ to leverage the offered infrastructure and configure the platform of its choice for hosting applications. Features of an IaaS offering include:

  • Allows provisioning of development, staging, and test environments as required
  • Enables application hosting with on-demand scalability
  • Offers pre-configured data storage, backup, and recovery
  • Supports on-demand computing for high-throughput requirements such as big data, machine learning, and other complex simulations

Platform-as-a-Service (PaaS)

Beyond abstracting the physical computing infrastructure, PaaS cloud offerings provide a dynamic, on-demand platform that supports application delivery, deployment, management, or testing. This allows developer organizations to quickly develop code and deploy it to production without having to worry about managing infrastructure or setting up deployment environments.

Software-as-a-Service (SaaS)

By far the most common offering, SaaS delivers entire applications over the internet on a subscription basis. In such offerings, the cloud provider will:

  • Host the application code and the underlying infrastructure.
  • Handle all aspects of its maintenance as per a mutual service level agreement (SLA).

Individual users can then access the software service through an endpoint such as an application or browser. Some commonly known SaaS applications include Google Workspace, Netflix, Dropbox, and YouTube.

(Check out the latest SaaS trends & news.)

Cloud computing deployment strategies

With a steady adoption rate, several types of cloud computing strategies have evolved to meet the needs of different use cases. There are three common approaches to provisioning cloud infrastructure.

(Explore public, private & hybrid cloud use cases.)

Public cloud

The public cloud is considered one of the most popular deployment strategies on account of its ease of management and cost benefits. In a public cloud, third-party service providers own and operate data centers while delivering computing resources over the internet. Popular public cloud platforms include:

  • Amazon Web Services (AWS)
  • Microsoft Azure
  • Google Cloud Platform (GCP)

Private cloud

In a private cloud, a single organization purchases and configures data center resources that are managed and maintained on private networks. Private clouds can either be:

  • Physically hosted on self-managed data centers
  • Remote servers managed by third-party service providers

Though the effort and cost of managing a private cloud are considerably higher than for a public cloud, organizations that deal with sensitive data, or operate in niche domains where a third-party service cannot be used, commonly opt for a private cloud deployment model.

Hybrid/multi-cloud

Automation and orchestration technologies are used to bind together public and private cloud deployment models for the sharing of data and applications. Businesses typically use hybrid clouds to improve flexibility, access more deployment options, enforce security and compliance, and optimize existing infrastructure.

Benefits of shifting workloads to the cloud

Though use cases differ across organizations, the following are some of the most common benefits of adopting a cloud service:

  • Cost optimization. With cloud computing, organizations eliminate the capital expense of purchasing and configuring computing infrastructure, as well as the personnel and costs of running on-premises data centers.
  • Speed. Cloud computing platforms offer services on demand, allowing services to be deployed and applications to access production-scale resources within minutes.
  • Scalability. The ability to scale up resource capacity rapidly is one of the prime benefits of leveraging the cloud. This allows an organization to provision resources based on demand spikes while keeping operating expenses low.
  • Enhanced application performance. By deploying services on globally distributed Content Distribution Networks (CDNs) and efficient load balancing mechanisms, a cloud deployment enables high availability and workload performance.
  • Reliability. Cloud computing also simplifies data backup and disaster recovery, thereby assuring business continuity. Additionally, by leveraging innovative monitoring and storage strategies, cloud-based stateful workloads are considered much more resilient and reliable than those running on a monolithic framework.

(Understand the effect of redundancy on availability.)

AWS Redshift vs Snowflake: What’s The Difference & How To Choose?

Snowflake and Amazon Redshift are two popular cloud-based data warehousing platforms that offer outstanding performance, scale, and business intelligence capabilities. Both platforms offer similar core functionalities, such as:

  • Relational management
  • Security
  • Scalability
  • Cost efficiency

The key differences, however, are their pricing models, deployment options, and user experience.

In this article, we’ll help you decide whether AWS Redshift or Snowflake is right for you. Let’s compare these two solutions based on their similarities, differences, and use cases. We also highlight how each platform addresses common challenges faced by businesses looking to implement a data warehouse.


Choosing the right data warehouse

First, let’s briefly look at data warehouses in general.

Data warehouses (DWH) are large repositories of data, collected from different data sources, which organizations typically use for analytical insights and business intelligence. An efficient data warehouse relies on an architecture that offers consistency by collecting data from different operational databases, then applying a uniform format for easier analysis and quicker insights.

One of the fundamental purposes of data warehouses is to enable quick access to historical data and context, thereby helping decision-makers to optimize strategies and improve bottom lines.

Implementing the right data warehousing solution is key to gaining a competitive advantage in today’s data-centric business world. Leveraging an efficiently provisioned business intelligence framework, a data warehouse supports business outcomes such as:

  • Increased bottom line
  • Efficient decision making
  • Enhanced customer service
  • Improved analytics

The most important characteristic of an efficiently designed data warehouse is that it:

  • Has consistent schemas across different tables that return expected results against a query.
  • Supports multi-table querying out of the box, which allows users to generate ad hoc reports without writing custom code or creating custom table views.

Some key factors to consider when selecting a warehousing platform include:

  • Business goals
  • Cost models
  • Simplicity of integration
  • Cloud-readiness
  • Adherence to security and compliance standards

What is Snowflake?

Snowflake is a cloud-based, software as a service (SaaS) data platform that allows:

  • Secure data sharing
  • Unlimited scaling
  • A seamless multi-cloud experience

The platform relies on a virtual warehouse framework that leverages third-party cloud compute resources such as AWS, Azure, or GCP. The option to choose high-performance cloud platforms provides real-time auto-scaling to organizations that are looking to:

  • Run faster workloads
  • Process large query volumes on the elastic cloud

As compared to legacy DWH solutions, Snowflake offers a non-traditional approach to data warehousing by abstracting compute from storage. That means data can reside in a central repository while compute instances are sized, scaled, and managed independently.

Snowflake manages all aspects of data administration for a simpler, more flexible warehousing solution that provides various capabilities of enterprise offerings.

The Snowflake analytics platform leverages a custom SQL query engine and three-layer architecture to support real time analytics of streaming big data. Its flexible architecture allows users to build their own analytical applications without having to learn new programming languages.

(Check out our Snowflake Guide.)

Benefits of Snowflake

  • Organizations don’t need to install, configure, or manage the underlying warehouse platform, including hardware or software
  • Integrates with most components of the data ecosystem
  • Separates configuration, management and charges for storage and compute instances
  • Offers an intuitive, powerful SQL interface
  • Enables account-to-account data sharing
  • Simple to set up and use

When to use Snowflake

Snowflake is considered the perfect data warehouse solution for situations when…

  • The query load is expected to be lighter.
  • Workload requires frequent scaling.
  • Your organization requires an automated, managed solution with zero operational overhead to manage the underlying platform.

Now let’s turn to Redshift.

What is Amazon Redshift?

AWS Redshift is a data warehousing platform that uses cloud-based compute nodes to enable large scale data analysis and storage. The platform employs column-oriented databases to connect business intelligence solutions with SQL-based query engines. By leveraging PostgreSQL and Massively Parallel Processing (MPP) on dense storage nodes, the platform delivers quick query outputs on large data sets.

While offering faster query processing, Redshift also offers multiple options for efficient management of its clusters. These include:

  • Interactively using the AWS CLI or Amazon Redshift Console
  • Amazon Redshift Query API
  • AWS Software Development Kit
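For example, the AWS CLI mentioned above can inspect the state of an existing cluster. This is a minimal sketch; the cluster identifier is hypothetical and the CLI is assumed to be configured with appropriate credentials:

$ aws redshift describe-clusters --cluster-identifier my-warehouse

The same operations are also available programmatically through the Query API and the SDKs.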

Amazon Redshift is a fully managed warehousing platform that allows organizations to query and combine petabytes of data with optimized price performance. The Advanced Query Accelerator (AQUA) offers a cache that boosts query operations performance by up to 10x, allowing businesses to gain new insights from every data point in the application/system.

(Explore our hands-on AWS Redshift Guide.)

Benefits of AWS Redshift

  • Offers a user-friendly console for easier analytics and query
  • A fully managed platform that requires little effort towards maintenance, upgrading and administration
  • Integrates seamlessly with the AWS services ecosystem
  • Supports multiple data output formats
  • Works seamlessly with SQL data using PostgreSQL syntax

When to use Redshift

AWS Redshift is considered the perfect data warehouse solution for situations when…

  • Your organization is already using AWS services.
  • Workloads run structured data.
  • The application has a high query load.


AWS Redshift vs Snowflake: A quick comparison

Let’s look at the clear differences between the two.

  • Snowflake is a complete SaaS offering that requires no maintenance, while AWS Redshift clusters require some manual maintenance.
  • Snowflake separates compute from storage, allowing for flexible pricing and configuration. Redshift allows for cost optimization through Reserved/Spot instance pricing.
  • Snowflake implements instantaneous auto-scaling while Redshift requires addition/removal of nodes for scaling.
  • Snowflake supports fewer data customization choices, whereas Redshift supports data flexibility through features like partitioning and distribution.
  • Snowflake supports always-on encryption that enforces strict security checks while Redshift provides a flexible, customizable security model.

Similarities between Snowflake & Redshift

  • Both support Massively Parallel Processing (MPP) for faster performance.
  • Both platforms connect BI solutions to column-oriented databases.
  • Data in both warehouses is accessed using SQL-based query engines.
  • Both Snowflake and Redshift are designed to abstract data management tasks so users can easily gain insights and improve system performance using data-driven decisions.

Choosing Snowflake or Redshift

In the modern data-driven world, data warehousing solutions allow organizations to store large sets of operational data and make holistic analytical decisions to improve system performance.

DWHs are designed to store vast amounts of structured or semi-structured data to provide fast retrieval times and easy analytics.

Redshift and Snowflake are two top cloud-based data warehouses that offer powerful data management and analysis options. Both platforms also offer:

  • High availability with minimal downtime
  • Scalability through replication across multiple servers

Both platforms are highly popular, and neither decisively outclasses the other; the choice between the two depends on business demands, resources, bundled services, and specific use cases.

Cloud Native Security: A Beginner’s Guide

Cloud native systems empower organizations to build, deploy, and run scalable workloads in dynamic environments.

While such environments support an agile development framework, they also bring a fresh set of security challenges that can't be solved with traditional IT security practices. Though portability, autoscaling, and automation are key features of an efficient cloud native ecosystem, those same features also create potential gaps that attackers can exploit.

In this article, we delve into the security landscape of a cloud native system, while exploring the elements and strategies to enforce security in such frameworks.

Cloud native security overview

Cloud native applications lack the fixed perimeters present in traditional IT. As a result, static firewalls rarely serve their purpose of securing applications that run on multi-cloud, on-premises, or off-premises cloud instances.

The flexible, scalable, and elastic nature of cloud environments also reduces the speed and accuracy with which security teams can diagnose security incidents. Added to this are rapid delivery and release cycles that make it complex to provision and manage security policies manually.

These factors collectively present challenges that require a non-traditional, focused approach to mitigate security events of cloud native systems.

The 4 Cs: Pillars of Cloud Native Security


An effective cloud native security model addresses threats across every level of a workflow—simply remember the 4 Cs:

Code

Analyzing, debugging, and cleaning up source code is the first step to identify and fix vulnerabilities such as Cross-Site Scripting (XSS) and SQL Injection during the build phase of a software development lifecycle (SDLC).

Some commonly used testing mechanisms for securing source code include:

  • Static Code Analysis (SCA)
  • Dynamic Application Security Testing (DAST)
  • Static Application Security Testing (SAST)

Container

Containers host application workloads and are considered one of the most critical elements of a cloud native setup.

It is critical not only to secure the application workloads of a cloud native ecosystem, but also to secure the containers that host these workloads. Some common approaches to securing containers include (an image-scanning example follows the list):

  • Minimizing the use of privileged containers
  • Strengthening container isolation
  • Continuous vulnerability scanning for container images
  • Certificate signing for images
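As an illustration of the image-scanning point, an open-source scanner such as Trivy can be run against an image before it is pushed (a hedged sketch, assuming Trivy is installed; the image name is hypothetical):

$ trivy image myregistry/webapp:1.0

The scan reports known vulnerabilities in the image's OS packages and application dependencies, which can then be gated in a CI pipeline.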

(Explore security in Docker & Kubernetes.)

Cluster

Containers running at scale are deployed on physical/virtual machine clusters. A cluster typically includes various components, such as worker/master nodes, control plane, policies, and services.

Securing cluster components commonly requires the following practices (an RBAC example follows the list):

  • Administering robust Pod and Network security policies
  • RBAC authorization
  • Optimum cluster resource management
  • Securing Ingress using TLS secure keys
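As one concrete illustration of the RBAC point, you can verify what a given service account is permitted to do before granting it broader roles (a minimal sketch; the namespace and service account names are hypothetical):

$ kubectl auth can-i list pods --as=system:serviceaccount:default:app-sa

A deny response here confirms the account is restricted to the permissions you intended.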

Cloud

The cloud layer acts as the interface that communicates with the external world, including users, third-party plugins, and APIs. Vulnerabilities on a cloud layer are bound to cause a major impact on all services, processes and applications that are hosted within it.

It is extremely critical for security teams to adopt security best practices and develop a threat model that focuses particularly on the cloud infrastructure layer and its components. Some common practices to secure the cloud layer include:

  • Encrypting etcd data at rest (Kubernetes)
  • Frequently rotating and renewing CA certificates
  • Limiting the use of privileged access
  • Disabling public access

Key elements of a cloud native security platform

Cloud native security tools have gradually evolved from rudimentary collections of multiple tools and dashboards to well-defined platforms that consider all layers of the ecosystem.

A cloud native security platform (CNSP) focuses on the following elements of a tech stack to administer a comprehensive secure framework:

  • Resource inventory. The CNSP maintains asset logs in the SDLC and keeps track of all the changes for automatic resource management.
  • Network security ingests logs of traffic flow directly from the deployment platforms and develops a deep understanding of cloud native firewall rules to scan and monitor network threats.
  • Compliance management supports different major compliance frameworks to monitor security posture and compliance throughout the cloud framework.
  • Data security utilizes out-of-the-box classification rules to scan for malware, monitor regulatory compliance, and ensure data compliance across deployment environments.
  • Workload security secures application workloads by proactively mitigating runtime threats of production instances.
  • Identity & access management (IAM) administers a robust access and authentication framework to secure user accounts as the first line of defense, leveraging multiple third-party tools.
  • Automatic detection, identification & remediation supports robust threat modelling by utilizing historical data and the existing security landscape of the industry.
  • Vulnerability management identifies and secures vulnerable points of the entire stack from a holistic standpoint.

Administering cloud native security

The fundamental benefit of leveraging a CNSP to administer security is that it gives organizations the freedom to choose a security stack to suit the organization’s specific use case.

Before choosing a CNSP, however, it’s important that the organization performs appropriate due diligence to opt for the right strategy and factors in the best practices for a comprehensive robust security framework.

Cloud native security strategies

Cloud native security is typically administered by opting for the strategy that supports the business-to-vendor working model while ensuring comprehensive security across various layers and processes of the tech stack. Some commonly used cloud native security strategies include:

  • The Shared Responsibility Model leverages the involvement of both the cloud service provider(s) and an organization’s in-house security team to ensure application security. This is done by assigning and sharing ownership of maintaining security for individual components of a cloud native framework. Though this model typically gives the advantage of planning the security framework inside-out, it may often get complicated in multi-cloud environments due to variations in component ownership.
  • Multi-Layered Security, also referred to as the ‘defensive depth’ approach, involves monitoring all layers of the network to identify and mitigate potential threats individually. The strategy essentially relies on a number of different tools and approaches to counter attacks alongside planning contingency in the event of a compromise.
  • Cloud-Agnostic Security is commonly used for multi-cloud models by leveraging a common CNSP for multiple cloud service providers. The strategy essentially provisions a single pane of glass of security best practices to be followed by multiple parties and distributed teams to streamline monitoring, compliance, and disaster recovery.

Benefits of cloud native security platforms

Modern CNSPs combine automation, intelligence, data analytics, and threat detection to mitigate security gaps in highly distributed cloud instances. Besides enabling a robust security framework, some additional benefits of adopting a cloud native security platform include:

  • Improved visibility & monitoring. Cloud native security platforms enable continuous testing across all CI/CD layers, allowing teams to monitor and mitigate security incidents at the component level.
  • Platform flexibility. By supporting TLS across a multi-cloud and hybrid deployment environment, CNSP allows a platform-agnostic development model.
  • Enhanced backup & data recovery. The automation enforced by CNSPs enables rapid patch deployment and mitigation of security threats.

Cloud native is already here

A Fortinet survey indicates that 33% of surveyed businesses already run more than half of their workloads on the cloud. Yet despite all the benefits these organizations gain, security continues to be a major challenge. In this context, organizations must also realize that most security failures occur due to security misconfiguration—not inherent architectural vulnerabilities.

A Gartner report validates this, predicting that through 2025, 99% of cloud security failures will be the customer's fault. This exposes the outright failure of organizations to adopt the right practices and tools to mitigate avoidable attacks.

When measuring an application's success, security should no longer be an afterthought. It is as critical as scalability and agility.

IT Infrastructure Automation: A Beginner’s Guide

Modern applications are dynamic, constantly growing to accommodate large volumes of data generated by users and devices. This typically requires software teams to constantly provision new deployment environments or reconfigure existing ones to keep the applications running smoothly. Unfortunately, manual provisioning and infrastructure configuration is:

  • Painstakingly slow
  • Inefficient
  • Prone to failure from human error

This is where infrastructure automation comes in.

This article explores why software organizations need infrastructure automation, and how it helps improve technical and business outcomes.

What is infrastructure automation?

IT infrastructure automation aims to simplify IT operations while improving speed and agility by enabling software teams to perform various management tasks with minimal human intervention. By reducing the manual effort involved in provisioning and managing workloads, IT automation enables teams to focus on strategic processes that add business value.

(Read our IT infrastructure primer.)

Basics of IT automation

A typical IT ecosystem contains a multitude of components distributed across the various layers of IT infrastructure. These components require numerous repetitive, manual processes to manage, maintain, and update.

To deal with this, organizations automate processes that improve the speed and agility of product delivery without raising the cost and complexity of running infrastructure. IT automation strategies also include standardization of compliance policies, which enforce mandatory regulations and reduce security attack surfaces.

Embracing automation is considered as crucial as comprehensive digital transformation—without it, you risk your competitive edge.

Benefits of infrastructure automation

Automation eliminates manual provisioning and handling of underlying infrastructure processes, enabling the rapid development of secure, scalable applications. Some benefits of infrastructure automation include:

  • Reducing human error. Automation eliminates vulnerabilities typically associated with human error during manual provisioning. By reducing manual efforts, IT teams focus on core development and innovation rather than investing efforts on iterative processes.
  • Reducing infrastructure complexity. Automation reduces the cost and effort of implementing and managing IT infrastructure. By reducing the administrative burden of performing repetitive tasks, operations teams work within a predictable framework of known complexities, which allows them to optimize infrastructure for an enhanced user experience.
  • Enhancing workflows. Automation allows for repeatability, predictability, and accuracy when performing IT provisioning tasks. Operations teams only need to set the desired conditions for the provisioning of infrastructure, while automation tools execute the tasks needed when the right conditions are met.
  • Speeding up delivery and deployment. By autonomously executing repetitive workflows across multiple machines, automation significantly reduces the time taken to configure IT infrastructure. This means teams can develop products faster and reduce the overall time-to-market.

How infrastructure automation works

Automating infrastructure can seem daunting. Fortunately, several practices help ensure the key features of IT infrastructure remain intact.

Standard operating environments

The first step to automating infrastructure is to define standard operating environments (SOEs) for servers and workstations. An SOE defines a specific operating system combined with the associated software and hardware configurations needed to deploy and run application workloads within the organization’s IT ecosystem. An SOE definition typically considers the following components:

  • Operating system
  • Service packs
  • Common applications
  • Associated dependencies

By making IT infrastructure management processes predictable and repeatable, SOEs enforce a common standard for consistent and timely maintenance.

Infrastructure as code

One critical aspect of automation is to abstract the management of underlying infrastructure using the same principles that govern coding in DevOps—a concept known as infrastructure as code (IaC).

This allows software teams to create target environments using configuration files of pre-defined formats such as JSON or YAML. These machine-readable files rely on declarative or imperative commands to manage policies through centralized templates and automation libraries, thereby simplifying resource and application configuration.

Infrastructure as code also helps organizations achieve uniformity across different deployment environments, allowing for simple automation of multi-cloud or hybrid deployments.
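
To make this concrete, here is a minimal, illustrative declarative template in YAML, loosely in the style of AWS CloudFormation; the logical resource name and AMI ID are placeholders rather than values from any real environment:

AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative template that declares a single web server
Resources:
  WebServer:                           # logical resource name (hypothetical)
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      Tags:
        - Key: environment
          Value: staging

The file describes only the desired end state; the tooling works out the underlying API calls needed to reach it, which is what makes the same template repeatable across environments.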


How to automate IT infrastructure

Automation helps reduce the cost and complexity of running IT infrastructure—yet doing so requires careful planning, since not all components of the ecosystem can be automated. This section explores the IT processes that organizations typically automate. Automation is mostly used to fast-track tasks that are:

  • Repetitive
  • Well-documented
  • Self-contained
  • Tedious

While use cases differ for different organizations, an organization with a typical IT setup automates the following processes:

Orchestration

Orchestration coordinates individual automated tasks and configurations into end-to-end workflows across systems. Robust automation solutions include capabilities to:

  • Automate different processes
  • Manage configurations across multiple nodes/machines

(Compare automation to orchestration.)

Resource provisioning

Automation works in collaboration with orchestration to run software-defined networks, storage devices, virtual machines, and data centers for seamless workload handling. While doing so, automation tools enable systems to meet business demands by autonomously scaling resource capacity across multiple environments.
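
As one hedged illustration of autonomous capacity scaling, the sketch below uses a Kubernetes HorizontalPodAutoscaler, assuming a containerized workload already running as a Deployment named web-frontend (a hypothetical name); other platforms expose equivalent autoscaling policies:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend              # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 2                  # capacity floor
  maxReplicas: 10                 # capacity ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add or remove replicas around 70% average CPU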

Configuration management

Automation enables teams to efficiently unify operations across disparate machines and deployment environments since it allows staff to define infrastructure as code. With automation tools, teams simplify configuration management using predefined scripts and best practices that are shared throughout the organization.
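
A minimal sketch of such a predefined script, written here as an Ansible playbook that drives a group of web servers toward a desired state (the webservers group name is an assumption about the inventory):

---
- name: Ensure web servers match the desired state
  hosts: webservers               # assumed inventory group
  become: true
  tasks:
    - name: Install Apache
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

Because each task declares a state rather than a command, re-running the playbook on machines that already comply changes nothing, which is what keeps configurations consistent across disparate environments.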

IT migration

Automation allows companies to move operating systems, applications, and data faster and more smoothly than manual processes, since deployment relies on standard operating environments.

Application deployment

Automated systems perform essential testing tasks while also enabling seamless CI/CD by providing a repeatable, proven, and secure approach to moving from commit to build to test to deployment.

(Build your own CI/CD pipeline.)

Security & compliance

Automation lets security teams define compliance and risk management policies as IaC declarations, which are then applied automatically as guardrails when provisioning infrastructure.
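
For example, a rule such as "no password logins over SSH" can be written once as code and applied during provisioning; the sketch below shows one possible Ansible task for it (the baseline is illustrative, not tied to any specific regulation):

- name: Apply SSH hardening baseline
  hosts: all
  become: true
  tasks:
    - name: Disable password authentication over SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart sshd
  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted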

Popular IT infrastructure automation solutions

Of course, tools can help any organization jumpstart the automation journey. Some of the most popular automation tools include:

  • Ansible. An open-source enterprise automation tool that uses collections of pre-composed content for quick implementation of automation projects.
  • Terraform. A declarative, open-source coding tool that uses pre-configured modules to allow the configuration of multiple, clustered infrastructure resources.
  • Puppet. An open-source server management and configuration tool that uses a Domain-Specific Language and desired state management for automation.
  • Chef. An enterprise dashboard and analytic platform that allows for complete code visibility and IT automation through collaboration and real-time resource scaling.
  • SaltStack. A data-driven remote execution and orchestration tool that also allows for infrastructure management and automation.
  • CloudFormation. An IaC service from AWS that uses templates to model related AWS resources, provision them quickly, and manage their lifecycles.

(Automate infrastructure with these BMC solutions.)

Infrastructure (hyper)automation

The efficiency of an infrastructure is often measured by its level of automation and how little human intervention it requires. While some processes have a limited lifecycle, the rule of thumb is to scope automation to tasks that are repetitive or that follow a standardized set of steps.

Infrastructure automation adoption is on the rise, with Gartner predicting that organizations will refocus over 30% of their IT operations on analytics and automation capabilities—something known as hyperautomation. With more firms adopting hybrid and multi-cloud deployments, the need for enterprise automation solutions to help manage infrastructure provisioning continues to grow.


Related reading

]]>
SRE vs DevOps: What’s The Difference? https://www.bmc.com/blogs/sre-vs-devops/ Thu, 26 Aug 2021 00:00:48 +0000 https://www.bmc.com/blogs/?p=15701 With the growing complexity of application development, organizations are increasingly adopting methodologies that enable reliable, scalable software. DevOps and site reliability engineering (SRE) are two approaches that enhance the product release cycle through enhanced collaboration, automation, and monitoring. Both approaches utilize automation and collaboration to help teams build resilient and reliable software—but there are fundamental […]]]>

With the growing complexity of application development, organizations are increasingly adopting methodologies that enable reliable, scalable software.

DevOps and site reliability engineering (SRE) are two approaches that enhance the product release cycle through enhanced collaboration, automation, and monitoring. Both approaches utilize automation and collaboration to help teams build resilient and reliable software—but there are fundamental differences in what these approaches offer and how they operate.

So, this article delves into the purpose of DevOps and SRE. We’ll look at both approaches, including benefits, differences, and key elements.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)


DevOps basics

DevOps is an overarching concept and culture aimed at ensuring the rapid release of stable, secure software. DevOps exists at the intersection of Agile development and Enterprise Systems Management (ESM) practices.

Early development methodologies involved development and operations teams working in silos, which led to slower development and unstable deployment environments. To solve this, the DevOps methodology integrates all stakeholders in the application into one efficient workflow which enables the quick delivery of high quality software.

By allowing communication and collaboration between cross-functional teams, DevOps also enables:

  • Reliable service delivery
  • Improved customer satisfaction

(Explore our multi-part DevOps Guide.)

DevOps practices & methods

DevOps practices are based on continuous, incremental improvements bolstered by automation. Full-fledged automation is rarely possible, but to automate as comprehensively as it can, a DevOps methodology focuses on the following elements:

Continuous integration & delivery (CI/CD)

DevOps aims to deliver applications and updates to customers rapidly and frequently. By using CI/CD pipelines to seamlessly connect processes and practices, DevOps automates updating and releasing code into production.

CI/CD also involves continuous monitoring and deployment to ensure code consistency across various software versions and deployment environments.
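
As a sketch of what such a pipeline can look like, the YAML below outlines a minimal GitHub Actions-style workflow; the make targets and deploy script are assumptions about the repository, not prescribed steps:

name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test              # assumes the repo defines a 'test' target
      - name: Build artifact
        run: make build             # assumes a 'build' target
      - name: Deploy to production
        if: github.ref == 'refs/heads/main'
        run: ./scripts/deploy.sh    # hypothetical deploy script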

(Set up your own CI/CD pipeline.)

Infrastructure as code

DevOps emphasizes the abstraction of IT infrastructure so that it can be managed using software engineering methods and provisioned automatically. This results in a system that allows your team to efficiently:

  • Track changes
  • Monitor infrastructure configurations
  • Roll back changes that have undesired/unintended effects

Automated testing

Code is automatically and continuously tested while it is being written or updated. By eliminating the bottlenecks associated with pre-release testing, this continuous mechanism speeds up deployment.

DevOps works with…

Apart from the elements that help DevOps practices enable comprehensive automation, DevOps also relies on various methods that inherently enable faster delivery, efficient automation, and enhanced collaboration. Some methodologies that DevOps uses or otherwise pairs well with include:

  • Scrum. This framework describes the composition and roles of teams collaborating to accelerate quality assurance and code development. The scrum framework defines designated roles in the project and key workflows within all phases of a software development lifecycle (SDLC).
  • Kanban. A key workflow management mechanism that enables teams to define, manage, and improve on services that deliver business value.
  • Agile. The Agile framework defines processes that improve software teams’ responsiveness to changing market needs by enabling rapid, frequent, and iterative updates. Agile enables shorter development cycles which allow for a clearer understanding of business and development goals for improved customer satisfaction.

(Compare Scrum, Kanban & Agile.)

Benefits of DevOps

DevOps reduces the complexity of managing software engineering projects through collaboration and automation. Some benefits of adopting DevOps include:

  • Ensure quicker, more frequent delivery of application features that improve customer satisfaction
  • Create a balanced approach to managing the SDLC for enhanced productivity of software teams
  • Innovate faster by automating repetitive tasks
  • Remediate problems more quickly and efficiently
  • Minimize production costs by cutting down errors in maintenance and infrastructure management

Site reliability engineering (SRE) basics

SRE provides a unique approach to application lifecycle and service management by incorporating various aspects of software development into IT operations.

SRE was first developed in 2003 to create IT infrastructure architecture that meets the needs of enterprise-scale systems. With SRE, IT infrastructure is broken down into basic, abstract components that can be provisioned with software development best practices. This enables teams to use automation to solve most problems associated with managing applications in production.

SRE uses three Service Level Commitments to measure how well a system performs:

  • Service level indicators (SLIs): the measurements of system behavior, such as availability, latency, or error rate
  • Service level objectives (SLOs): the internal targets set for those indicators over a given period
  • Service level agreements (SLAs): the contractual commitments made to customers, typically looser than the SLOs that back them

Key principles of SRE, including reducing organizational silos, implementing gradual change, accepting failure as normal, leveraging tools and automation, and measuring everything, are explored in the section on how SRE supports DevOps below.

(Learn more about SRE concepts.)

The Site Reliability Engineer role

SRE essentially creates a new role: the site reliability engineer. An SRE is tasked with ensuring seamless collaboration between IT operations and development teams through the enhancement and automation of routine processes. Some core responsibilities of an SRE include:

  • Developing, configuring, and deploying software to be used by operations teams
  • Handling support escalation issues
  • Conducting and reporting on incident reviews
  • Developing system documentation
  • Change management
  • Determining and validating new features and updates

SRE tools

SRE teams rely on the automation of routine processes using tools and techniques that standardize operations across the software’s lifecycle. Some tools and technologies that support Site Reliability Engineering include:

  • Containers package applications in a unified environment across multiple deployment platforms, enabling cloud-native development.
  • Kubernetes is a popular container orchestrator that can effectively manage containerized applications running across multiple environments (a minimal deployment manifest is sketched after this list).
  • Cloud platforms allow you to provision scalable, flexible, and reliable applications in highly distributed environments. Popular platforms include Microsoft Azure, Amazon Web Services (AWS), and Google Cloud.
  • Project planning & management tools allow you to manage IT operations across distributed teams. Some popular tools include JIRA and Pivotal Tracker.
  • Source control tools such as Subversion and GitHub erase boundaries between developers and operators, allowing for seamless collaboration throughout application delivery.
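
A minimal Deployment manifest, using a hypothetical payments-api image, shows the kind of declarative object an SRE team hands to Kubernetes; the names, image, and port are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api                   # hypothetical service name
spec:
  replicas: 3                          # desired number of identical pods
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.2   # placeholder image reference
          ports:
            - containerPort: 8080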

SRE vs DevOps

Both methodologies enforce minimal separation between development and operations teams. But the key difference can be summed up like this: DevOps focuses more on a cultural and philosophical shift, while SRE is more pragmatic and practical.

This highlights various differences in how the concepts operate, including:

  • Essence. SRE was developed with a narrow focus: to create a set of practices and metrics that allow for improved collaboration and service delivery. DevOps, on the other hand, is a collection of philosophies that fosters a culture of collaboration between previously siloed teams.
  • Goal. Both SRE and DevOps aim to bridge the gap between development and operations, though SRE prescribes specific ways of achieving reliability, while DevOps works as a template that guides collaboration.
  • Focus. Site reliability engineering mainly focuses on enhancing system availability and reliability, while DevOps focuses on the speed of development and delivery with continuity.
  • Team structure. An SRE team is composed of site reliability engineers who have a background in both operations and development. DevOps teams include a variety of roles, including QA experts, developers, engineers, SREs, and many others.

(Explore DevOps team structure.)

How SRE supports DevOps principles & philosophies

SRE and DevOps are not competing methodologies. That’s because SRE provides a practical approach to solving most DevOps concerns.

In this section, let’s explore how teams use SRE to implement the principles and philosophies of DevOps:

Reducing organizational silos

DevOps works to ensure that different departments/software teams are not isolated from each other, ensuring they all work towards a common goal.

SRE enables this by enforcing the ownership of projects between teams. With SRE, every team uses the same tools, techniques, and codebase to support:

  • Uniformity
  • Seamless collaboration

Implementing gradual change

DevOps embraces slow, gradual change to enable constant improvements. SRE supports this by allowing teams to perform small, frequent updates that reduce the impact of changes on application availability and stability.

Additionally, SRE teams use CI/CD tools to perform change management and continuous testing to ensure the successful deployment of code alterations.

Accepting failure as normal

Both SRE and DevOps concepts treat errors and failure as an inevitable occurrence. While DevOps aims to handle runtime errors and allow teams to learn from them, SRE enforces error management through Service Level Commitments (SLx) to ensure all failures are handled.

SRE also provides for an error budget that lets teams test the limits of failure for reevaluation and innovation.

Leveraging tools & automation

Both DevOps and SRE use automation to improve workflows and service delivery. SRE enables teams to use the same tools and services through flexible application programming interfaces (APIs). While DevOps promotes the adoption of automation tools, SRE ensures every team member can access the updated automation tools and technologies.

Measure everything

Since both DevOps and SRE support automation, you’ll need to continuously monitor the developed systems to ensure every process runs as planned.

DevOps gathers metrics through a feedback loop. SRE, on the other hand, formalizes measurement through SLIs, SLOs, and SLAs. Since operations are software-defined, SRE also monitors toil and reliability, ensuring consistent service delivery.
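
To make these commitments concrete, the sketch below is a conceptual YAML summary for a hypothetical checkout-api service; it is not the schema of any particular monitoring tool, and the targets are illustrative:

service: checkout-api                  # hypothetical service name
sli:
  availability: successful_requests / total_requests     # what is measured
slo:
  availability_target: "99.9% over a rolling 30 days"    # internal objective
  error_budget: "0.1%, roughly 43 minutes of unavailability per 30 days"
sla:
  availability_commitment: "99.5%"     # contractual promise, intentionally looser than the SLO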

Summing up DevOps & SRE

SRE and DevOps are often referred to as two sides of the same coin, with SRE tooling and techniques complementing DevOps philosophies and practices. SRE involves the application of software engineering principles to automate and enhance ITOps functions such as:

  • Disaster response
  • Capacity planning
  • Monitoring

On the other hand, a DevOps model enables the rapid delivery of software products through collaboration between development and operations teams.

Over the years, of the organizations that have embraced DevOps, 50% have also adopted SRE for enhanced reliability. One reason for this is that SRE principles enable enhanced observability and control of dynamic applications that rely on automation.

In the end, the goal of both methodologies is to enhance the end-to-end cycle of an IT ecosystem: the application lifecycle through DevOps and operations lifecycle management through SRE.

“Gain insight to the capabilities necessary to attract top SRE talent and make them successful in your organization with artificial intelligence for operations (AIOps) and artificial intelligence for service management (AISM) capabilities.”

Related reading

]]>
DataOps Vs DevOps: What’s The Difference? https://www.bmc.com/blogs/devops-vs-dataops/ Wed, 04 Aug 2021 15:14:19 +0000 https://www.bmc.com/blogs/?p=50298 In today’s technology landscape, DevOps continues to be a popular methodology that needs no introduction. Given the success of DevOps, emerging methodologies have embraced DevOps features to extend into niche areas of software development. DataOps is one model that was created to help data management teams harness the power of automated data orchestration to develop […]]]>

In today’s technology landscape, DevOps continues to be a popular methodology that needs no introduction. Given the success of DevOps, emerging methodologies have embraced DevOps features to extend into niche areas of software development.

DataOps is one model that was created to help data management teams harness the power of automated data orchestration to develop intelligent, data-driven systems. The DataOps discipline introduces agile principles into data analytics, streamlining how data-centric applications are designed, developed, and maintained.

This article delves into the fundamental differences between DevOps and DataOps methodologies, similarities, and use-case benefits for each.

DataOps vs DevOps

While some presume DataOps to be simply DevOps for data science, the two methodologies differ in how they implement various stages of the development lifecycle. Though both emphasize agility and collaboration, they focus on different areas of the business and, as a result, utilize different approaches and pipelines.

So, let’s explore each methodology to uncover their fundamental differences.

What is DevOps?

A DevOps model implements a set of practices and tools that enable the collaboration of development and IT operations teams throughout the entire software development lifecycle (SDLC).

While doing so, DevOps extends agile methodologies by enforcing automation beyond the build phase to integrate operations as part of a seamless workflow. The automation enforced by DevOps essentially encompasses the build-test-release cycle with continuous integration and continuous delivery (CI/CD) pipelines, enabling quicker time-to-market without impacting code quality.

The methodology promotes rapid, continuous iteration throughout the software's lifecycle, enabling fast application development and delivery.

(Understand the importance of DevOps automation.)

Why people adopt DevOps

DevOps tools and processes enable quicker innovation, faster time to market, and an improved bottom line. With DevOps, organizations typically experience a number of benefits, including:

  • Improved communication and collaboration. DevOps breaks operational silos between application development, release management, and operations teams, allowing for easier collaboration and sharing of resources.
  • Cost savings. DevOps shortens delivery cycles, which reduces the cost of maintenance and upgrades by leveraging economies of scale through efficient CI/CD pipelines.
  • Reduced disaster recovery times. DevOps implements the build-test-deploy cycles in small, managed batches that help minimize the impact of deployment failures, bottlenecks, and rollbacks.

Elements of DevOps

Initially built on a collection of ideas of how to improve application development workflows, DevOps has now grown into a full-fledged ecosystem with defined standards and tools. Some critical elements of a DevOps framework include:

  • Automation. DevOps includes tools that perform repetitive, manual tasks automatically with minimal human intervention.
  • Infrastructure-as-Code (IaC). An essential aspect of implementing DevOps is better utilization of infrastructure resources. To achieve this, DevOps teams manage IT infrastructure using a descriptive model expressed as code, which enables deploying and testing infrastructure components in production-like instances.
  • CI/CD. DevOps follows an iterative model that intertwines continuous development, testing, and deployment to enable automation, security, and agile development.

A typical DevOps workflow

DevOps is a cyclical process that constantly iterates between five stages:

  1. Business vision blueprint
  2. Build operations (Code development and testing)
  3. Integration
  4. Deploy (Release of the application)
  5. Operate (Ongoing run and maintain)

The operate phase also loops usability feedback into the workflow that helps refine the business vision for responsive and iterative application development.

What is DataOps?

Modern applications and business systems continuously generate large amounts of data. DataOps is a methodology that outlines focused practices, tools, and frameworks for enterprise data management. The methodology standardizes the technological and cultural changes that help organizations to:

  • Reduce the cost of managing data
  • Improve data quality
  • Enable faster time-to-market for data-centric applications

While doing so, DataOps bridges the gap between data collection, analysis, and data-driven decision-making, allowing organizations to efficiently deliver analytical insights for improved business value.

(Learn about BMC’s approach to DataOps.)

Hear BMC CTO Ram Chakravarti share why traditional data and analytics approaches have fallen short and how DataOps can rapidly turn new insights into fully operationalized production deliverables that unlock maximum business value.

Benefits of DataOps

DataOps aims to improve the quality of data analytics while reducing the duration of the data lifecycle. Some benefits of the DataOps methodology include:

  • Automating manual data collection and analytics processes
  • Continuous monitoring of the data pipeline
  • Isolation of production data
  • Centralization and sharing of data definitions
  • Enhancing the reusability of the data stack
  • Enabling controlled data access

DataOps in modern app development

Organizations incorporate artificial intelligence and machine learning models into their digital products and services for enhanced analytical insights and improved customer experience. The DataOps model specifically helps data scientists, ML engineers, and analysts create models that support the end-to-end requirements of AI/ML frameworks.

Some critical areas of modern application development that benefit from DataOps include:

  • Self-service interaction
  • Data governance and curation services
  • Log and event monitoring
  • Vulnerability scanning
  • Search and indexing
  • Market analytics


DevOps vs DataOps

The fundamental goal of both DataOps and DevOps is to transform the product development lifecycle through enhanced agility and automation. Some similarities between the two methodologies include:

  • Both employ agile methods to shorten delivery lifecycles
  • Both enforce cross-functional collaboration between multiple teams
  • Both utilize a multitude of automation tools for faster development

The main differences between the two methodologies include:

Quality factor

DevOps mainly focuses on developing quality software by shortening development cycles. DataOps emphasizes the extraction of high-quality data for quicker, trustworthy BI insights.

Delivery automation

DevOps focuses on automating version control and server configurations.

DataOps focuses on automating data acquisition, modeling, integration, and curation for high-quality data delivery.

Collaboration

DevOps seamlessly integrates development and operations teams for rapid delivery while DataOps connects business leaders, IT development, and data analytics teams for quicker data processing.

Summarizing DevOps & DataOps

The current tech landscape is highly dynamic. To gain a competitive edge, businesses rely on applications that are highly scalable, efficient, and secure.

While efficient applications act as essential enablers for overall organizational efficiency, it is equally critical for organizations to adopt the right model that helps develop applications that are agile, efficient, and secure.

Among the list of various software development methodologies, DevOps continues to be the most popular choice. On the other hand, DataOps focuses on data-based application delivery.

Both DataOps and DevOps deliver advanced innovation and competitive advantage for firms looking to improve their development lifecycles. The two methodologies, however, differ in how they implement the build-test-deploy stages of software development. Setting that difference aside, both aim to enable a highly efficient, automated framework that relies on team-level collaboration and comprehensive automation of development workflows without compromising application reliability.

Related reading

]]>
ITSM vs ITOM: Service Management & Operations Management Explained https://www.bmc.com/blogs/it-service-management-vs-operations-management/ Wed, 14 Jul 2021 10:00:55 +0000 https://www.bmc.com/blogs/?p=50099 Organizations that leverage technology as an enabler for achieving efficiency—which is to say, all organizations—must diligently manage IT applications and the underlying infrastructure. To ensure business units remain operational, organizations adopt various practices to efficiently deliver IT services. These practices can: Help your organization achieve operational efficiency and workplace agility Support operational scale-up Offer critical […]]]>

Organizations that leverage technology as an enabler for achieving efficiency—which is to say, all organizations—must diligently manage IT applications and the underlying infrastructure.

To ensure business units remain operational, organizations adopt various practices to efficiently deliver IT services. These practices can:

  • Help your organization achieve operational efficiency and workplace agility
  • Support operational scale-up
  • Offer critical insights into your performance in various areas

IT service management (ITSM) and IT operations management (ITOM) are two essential practices. They sound similar in scope, and increasingly do overlap—however, they are not interchangeable. Service management and operations management are different areas of IT.

So, in this article, we’ll delve into:

  • The scope of service management and operations management within an organization
  • Service and operations management standards
  • How they overlap in functionality


How IT service management (ITSM) works

ITSM is an IT management strategy that involves using processes, tools, and frameworks to ensure that your organization can efficiently implement, deliver, manage, and support IT services for both:

  • Internal business operations
  • End users, often external customers/clients

ITSM is increasingly shortened to simply “service management” because practically all business services are now IT-enabled. You are practicing service management whether you have a formal structure in place or not.


By implementing a structure for handling the end-to-end delivery of IT services, organizations stand to achieve several benefits, including:

  • Increased agility and adaptability to market changes due to faster delivery, innovation, and resolution
  • Improved knowledge sharing among complex team structures
  • Preemptive issue identification, root cause analysis, and faster incident resolution
  • Better alignment of IT teams and enablers with business goals
  • Customer-centric services
  • Better overall processes with less waste

Common ITSM practices

ITSM functions on a collection of practices and processes that define and standardize the IT services that your organization offers. These core processes rely on agreed IT service level agreements (SLAs) and key performance indicators (KPIs) that define the overall service delivery.

Some key ITSM processes include:

Change management

The change management practice ensures that IT teams use standard procedures to handle all changes to infrastructure and systems while following compliance and regulatory standards.

Any changes requested are peer-reviewed, analyzed, and scheduled to ensure minimal disruptions to IT services and business operations.

Incident management

Considered one of the most critical IT practices, incident management involves responding to unplanned events and service disruptions as quickly as possible. Each incident's response and resolution is handled according to pre-defined service restoration SLAs, which help measure the overall business impact during a given period.

IT asset management (ITAM)

Considered the backbone of all service management processes, IT asset management is the end-to-end process of managing the lifecycle of all assets, both hardware and software, in your organization. This practice:

  • Ensures an up-to-date record of IT assets
  • Monitors asset deployment, usage, upgrades
  • Subsequently marks assets for disposal when due

(Learn more about software asset & hardware asset management.)

Problem management

Problem management involves identifying and addressing underlying issues to avoid incidents. Root cause analysis (RCA) and action items are the core activities that prevent incidents from recurring and causing service disruption.

(See how problem management can become proactive.)

Service request management

One of the most widely used processes of ITSM, service request management is the practice responsible for handling and maintaining records of all minor, non-urgent user requests such as:

  • Access requests
  • Software upgrades
  • Password reset
  • Hardware improvements
  • Etc.

Knowledge management

The knowledge management practice is responsible for creating, handling, and distributing information assets both within an organization and externally to meet business goals.

(Learn about Knowledge-Centered Service.)

Service management frameworks & the IT service lifecycle

ITSM frameworks are formalized guidelines of best practices and standard processes to implement and manage IT-based services. Some of these standards include:

  • ISO 20000
  • COBIT (Control Objectives for Information and Related Technologies)
  • The Business Process Framework (eTOM)
  • DevOps
  • ITIL®

While such frameworks overlap to achieve enhanced efficiency across the various verticals of your organization, the ITIL framework is the most globally recognized. ITIL focuses specifically on an IT service’s lifecycle by fragmenting it into a sequence of five stages:

  1. Service Strategy
  2. Service Design
  3. Service Transition
  4. Service Operation
  5. Continual Service Improvement

At each stage, ITIL enables a systemic process for efficiently managing the stage itself and then transitioning to the next one. Of the five stages, Service Operation is the one that continues throughout the life of the IT service. It is responsible for coordinating and executing the activities and processes necessary to deliver IT services.

How IT operations management (ITOM) works

Where service management focuses on the delivery of IT services, IT operations management is a set of administrative procedures that manage all the components of your organization’s IT infrastructure. In today’s digital age—that’s a lot of responsibility.

With operations management, organizations can administer defined processes for IT infrastructure provisioning, capacity, performance monitoring, storage, and availability management.

Benefits of a well-structured ITOM practice include:

  • Improved levels of service availability
  • Better customer and user experience
  • Minimal downtime
  • Lowered costs of operations due to regular component monitoring
  • Highly optimized service delivery
  • Robust foundation for digital transformation and DevOps

ITOM Functions

IT operations management focuses on three main aspects of infrastructure management:

Network

One of the core areas of IT infrastructure is undoubtedly network operations. To help with this, ITOM provisions various procedures to manage all internal and external network communications. This also includes managing network security of internal telephony and handheld devices to ensure only authorized users can remotely access the organizational network.

The scope of operations management also encompasses handling communication with external servers through efficient port and protocol management.


Hardware, servers & workplace devices

This function involves provisioning, configuring, managing, and troubleshooting the server, including any related components and workplace assets, such as laptops, mobile phones, and tablets.

Here, the operations team is responsible for:

  • Maintaining server uptime
  • Handling server upgrades and patches
  • Managing devices

Though organizations scope their operations differently, many companies include managing data storage, email, and file server setup under the server management umbrella.

Help desk operations

Tasks grouped under the help desk generally involve offering first-level support, which includes:

  • Provisioning users
  • Input for configuration audits
  • Backup requests
  • Facilities management
  • Etc.

In some organizations, help desk operations also involve administrative tasks such as disaster recovery management and IT infrastructure library maintenance.

(Compare technical support to customer service.)

Merging operations & service management

ITSM and ITOM are crucial parts of IT management that ensure consistent and efficient IT service delivery. Although both of their functions overlap, we can clearly distinguish the two:

  • Service management (ITSM) offers a comprehensive approach to delivering IT services.
  • Operations management (ITOM) handles the tools, environment, and processes for operating such services.

Fundamentally, though, ITOM operates on Service Operations Guidelines, a subset of overall ITSM, as the name implies. These guidelines standardize the management of recurring service operations by monitoring and controlling all tangible and non-tangible components of an organization’s infrastructure.

Historically, operations management and service management were two separate functions, with different teams, structures, and areas of responsibility. Today, however, the trend is towards merging the two: as technology drives every aspect of business, it's increasingly challenging to separate successful service delivery from the underlying practices and technology that support it.

With the right tools and processes, organizations tend to benefit from:

  • Increased server availability
  • Reduced operational risk
  • Improved customer satisfaction

Additionally, organizations looking to embrace efficient models such as DevOps will find that implementing ITSM standards puts them a step closer to achieving enhanced operational efficiency and workflow automation.

Related reading

]]>
Service-Oriented Architecture vs Microservices Architecture: Comparing SOA to MSA https://www.bmc.com/blogs/microservices-vs-soa-whats-difference/ Mon, 14 Jun 2021 08:43:33 +0000 http://www.bmc.com/blogs/?p=10634 In computing, a service refers to a single or collective units of software that perform repetitive, redundant tasks. In the era of cloud computing, applications are composed of a collection of services that collectively perform various functions to support the application’s overall functionality. In this article, we explore Microservices Architecture (MSA) and Service-Oriented Architecture (SOA) […]]]>

In computing, a service refers to a single unit, or a collection of units, of software that performs repetitive, well-defined tasks. In the era of cloud computing, applications are composed of a collection of services that together support the application's overall functionality.

In this article, we explore Microservices Architecture (MSA) and Service-Oriented Architecture (SOA) as two common service-based architectures—how they both rely on services as the main component and how they differ in terms of service characteristics.

Let’s take a look.

What is a service-oriented architecture (SOA)?

A service-oriented architecture is designed as a set of multiple self-contained, discrete, and reusable services that collectively deliver an application's functionality.

This enables a framework in which application components interact with and offer services to other components by leveraging a service interface (communication protocol).

SOA design principles

The principles of a Service Oriented Architecture may differ depending on your use case. Here are some common principles that segregate services to form an SOA:

  • Abstraction
  • Reusability
  • Granularity
  • Standardized contract
  • Autonomy
  • Statelessness
  • Discovery

Features of SOA

One fundamental use case of an SOA is to let you build an application from multiple distinct services used collectively, where each service encapsulates a unique piece of business or application logic.

Other than that, some common features of SOA include:

  • “Share as much as possible” architecture
  • Importance on business functionality reuse
  • Common governance and standards
  • Enterprise service bus (ESB) for communication
  • Multiple message protocols
  • Common platform for all services deployed to it
  • Multi-threaded with more overheads to handle I/O
  • Maximum application service reusability
  • More likely to use traditional relational databases
  • Not preferred in a DevOps model

What is a microservice architecture (MSA)?

A microservice architecture, often known simply as microservices, follows an SOA pattern by breaking a single application into multiple loosely coupled, independent services that still work together.

Often considered the perfect use case for containerization, microservices are routinely deployed with each service running in its own container. This enables an efficient framework of services that are flexible, portable, and platform-agnostic, allowing each service to use a different operating system and database while running in its own process.
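
As a small illustration, the Docker Compose sketch below runs two hypothetical services, orders and payments, each from its own image and communicating over plain HTTP; the image names and ports are placeholders:

services:
  orders:
    image: registry.example.com/orders:1.0       # placeholder image
    ports:
      - "8081:8080"                               # expose the orders API locally
  payments:
    image: registry.example.com/payments:1.0     # placeholder image
    environment:
      ORDERS_URL: http://orders:8080              # service-to-service call over HTTP/REST

Each service can be built, versioned, and replaced independently, which is the bounded-context idea discussed later in this article.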

Features of MSA

  • “Share as little as possible” architecture
  • Importance on the concept of bounded context
  • Relaxed governance, with more focus on people
  • Efficient collaboration and freedom in choosing platform and technologies
  • Simple, less elaborate messaging system
  • Lightweight protocols such as HTTP/REST and AMQP
  • Usually single-threaded, with event-loop features for non-blocking I/O handling
  • Containers work very well in MSA and are considered perfect for a DevOps model
  • More focused on decoupling
  • Uses modern, non-relational databases

Microservices vs SOA: key differences


Let’s look at the key differences between SOA and MSA.

Coordination

SOA requires coordination with multiple groups to create business requests.

On the contrary, there is little or no coordination among services in an MSA. In the event coordination is needed among service owners, it is done through small application development teams, and services can be quickly developed, tested, and deployed.

Service granularity

The prefix “micro” in microservices refers to the granularity of its internal components. Service components within MSA are generally single-purpose services that do one thing really well.

Services in SOA usually include much more business functionality and are often implemented as complete subsystems.

Component sharing

SOA enhances component sharing, whereas MSA tries to minimize sharing through “bounded context.” A bounded context refers to the coupling of a component and its data as a single unit with minimal dependencies.

As SOA relies on multiple services to fulfill a business request, systems built on SOA are likely to be slower than MSA.

Middleware vs API layer

The messaging middleware in SOA offers a host of additional capabilities not found in MSA, including:

  • Mediation and routing
  • Message enhancement
  • Message and protocol transformation

MSA has an API layer between services and service consumers.

Remote services

SOA architectures rely on messaging (AMQP, MSMQ) and SOAP as primary remote access protocols.

Most MSAs rely on two protocols—REST and simple messaging (JMS, MSMQ)—and the protocol found in MSA is usually homogeneous.

Heterogeneous interoperability

SOA promotes the propagation of multiple heterogeneous protocols through its messaging middleware component. MSA attempts to simplify the architecture pattern by reducing the number of choices for integration.

  • If you would like to integrate several systems using different protocols in a heterogeneous environment, you need to consider SOA.
  • If all your services could be exposed and accessed through the same remote access protocol, then MSA is a better option.

Contract decoupling

Contract decoupling is the holy grail of abstraction. It offers the greatest degree of decoupling between services and consumers. It is one of the fundamental capabilities offered within SOA—but MSA doesn’t support contract decoupling.

Which architecture to choose?

Here are a few key considerations when opting for either of the patterns:

  • SOA is better suited for large and complex business application environments that require integration with many heterogeneous applications. However, workflow-based applications that have a well-defined processing flow are a bit difficult to implement using SOA patterns. Small applications are also not a good fit for SOA as they don’t need a messaging middleware component.
  • The MSA pattern is well suited for smaller and well partitioned web-based systems. The lack of messaging middleware is one of the key factors that make MSA unfit for complex environments.
  • Control vs orchestration. When developing an application from scratch, MSA is considered the pragmatic choice as it offers greater control as a developer. On the other hand, if the goal is to orchestrate business processes, SOA is considered ideal as it provides the right framework.
  • Early-stage vs more mature organizations. Businesses that are in their early stages might find MSA as an ideal choice. As the business grows, organizations may require capabilities such as complex request transformation and heterogeneous systems integration. In such situations, organizations often turn to the SOA pattern to replace MSA.

Both SOA and MSA follow an identical pattern of services at different layers of an enterprise. The existence of MSA owes much to the success of the SOA pattern, and MSA is therefore often referred to as a subset of SOA.

While both microservices and service-oriented architectures are built on breaking an application into multiple services, an MSA disaggregates services at the application level, while an SOA aims for service reusability at the enterprise level.

Related reading


]]>