What Is Terraform? Terraform & Its IaC Role Explained

Managing infrastructure is a core requirement for most modern applications. Even in PaaS or serverless environments, there will still be components that require user intervention for customization and management. With the ever-increasing complexity of software applications, more and more infrastructure modifications are required to facilitate the functionality of the software.

Manual infrastructure management cannot keep up with rapid development cycles; it creates bottlenecks that delay the delivery process.

Infrastructure as Code (IaC) has become the solution to this issue—allowing users to align infrastructure changes with development. It also facilitates faster automated repeatable changes by codifying all the infrastructure and configuration and managing them through the delivery pipeline.

Terraform is one of the leading platform-agnostic IaC tools, allowing users to define and manage infrastructure as code. In this article, let’s dig into what Terraform is and how we can utilize it to manage infrastructure at scale.

What is Infrastructure as Code?

Before moving into Terraform, we need to understand Infrastructure as Code. To put it simply, IaC enables users to codify their infrastructure. It allows users to:

  • Create repeatable version-controlled configurations
  • Integrate them as a part of the CI/CD pipeline
  • Automate the infrastructure management

If an infrastructure change is needed in a more traditional delivery pipeline, the infrastructure team has to be informed, and the pipeline cannot proceed until the change is made to the environment. Such an inflexible manual process hinders the overall efficiency of the SDLC, especially when practices like DevOps demand fast yet flexible delivery pipelines.

IaC allows infrastructure changes to be managed through a source control mechanism like Git and integrated as an automated part of the CI/CD pipeline. It not only automates infrastructure changes but also makes those changes auditable and easy to roll back if needed.

What is Terraform?

Terraform is an open-source infrastructure as code (IaC) tool from HashiCorp. It allows users to define both on-premises and cloud resources in human-readable configuration files that can be easily versioned, reused, and shared. Terraform can be used to manage both low-level components (like compute, storage, and networking resources) as well as high-level resources (DNS, PaaS, and SaaS components).

Terraform is a declarative tool, which further simplifies the user experience: users specify the expected state of resources without having to spell out the exact steps to achieve it, and Terraform works out how the infrastructure must be modified to reach that desired state.

Terraform is a platform-agnostic tool, meaning that it can be used across any supported provider. Terraform accomplishes this by interacting with the APIs of cloud providers. When a configuration is applied, Terraform communicates with the target platform via its API and ensures the defined changes are carried out. With more than 1,700 providers from HashiCorp and the Terraform community available in the Terraform Registry, users can configure resources from leading cloud providers like Azure, AWS, GCP, and Oracle Cloud to more domain-specific platforms like Cloudflare, Dynatrace, Elastic Stack, Datadog, and Kubernetes.

The Terraform workflow

The Terraform workflow is remarkably simple, consisting of only three steps to manage any type of infrastructure, while still giving users the flexibility to adapt the workflow to their exact implementation needs.

Terraform Workflow

1. Write

The first stage of the workflow is where users create the configurations that define or modify the underlying resources. This can range from provisioning a single compute instance in a cloud provider to deploying a multi-cloud Kubernetes cluster. Configurations can be written either in HashiCorp Configuration Language (HCL), the default language for defining resources, or with the Cloud Development Kit for Terraform (CDKTF), which allows users to define resources using supported general-purpose programming languages like Python, C#, Go, and TypeScript.
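
To give a sense of the CDKTF approach, here is a minimal Python sketch. It assumes the cdktf and constructs packages are installed via pip and that provider bindings are generated separately with the cdktf CLI; the stack name "hello-terraform" is an arbitrary example.

#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack

class MyStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)
        # Provider and resource definitions go here once the
        # provider bindings have been generated (e.g., for AWS)

app = App()
MyStack(app, "hello-terraform")

# Running `cdktf synth` invokes this and produces the equivalent Terraform configuration
app.synth()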

2. Plan

This is the second stage of the workflow, where Terraform inspects the configuration files and creates an execution plan. It enables users to preview the exact changes that will happen to the underlying infrastructure: which resources will be created, modified, or destroyed.

3. Apply

This is the final stage of the workflow, which takes place once the user has reviewed the plan and confirmed the changes. Terraform carries out the changes to achieve the desired state in a specific order, respecting all resource dependencies. This happens regardless of whether you have defined dependencies in the configuration; Terraform automatically identifies the resource dependencies of the platform and executes the changes without causing issues.

Terraform uses state to keep track of all changes to the infrastructure and detect configuration drift. It creates a state file on the initial execution and updates it with each subsequent change. This state file can be stored locally or in a remote backend like an S3 bucket. Terraform always references the state file to identify the resources it manages and to track changes to the infrastructure.
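
Because the state file is plain JSON, it is easy to inspect. The short Python sketch below lists the managed resources from a local state file; it assumes a terraform.tfstate file in the current directory using the modern (Terraform 0.12+) state format.

import json

# Load the local state file (read-only inspection; never edit state by hand)
with open("terraform.tfstate") as f:
    state = json.load(f)

print("Terraform version:", state.get("terraform_version"))

# Each entry in "resources" records a resource Terraform manages
for resource in state.get("resources", []):
    print(f'{resource["mode"]}: {resource["type"]}.{resource["name"]}')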

Benefits of Terraform

Let’s look at why so many people appreciate Terraform:

  • Declarative nature. A declarative tool allows users to specify the end state, and the IaC tool automatically carries out the necessary steps to achieve that state. This is in contrast to imperative IaC tools, where users must define the exact steps required to reach the desired state.
  • Platform agnostic. Most IaC tools, like AWS CloudFormation and Azure Resource Manager templates, are platform-specific. Terraform lets users manage infrastructure across platforms with a single tool, which is especially valuable for applications spanning many tools, platforms, and multi-cloud architectures.
  • Reusable configurations. Terraform encourages the creation of reusable configurations, where the same configuration can provision multiple environments. Additionally, Terraform allows creating reusable components within configuration files through modules.
  • Managed state. With state files keeping track of all changes in the environment, all modifications are recorded, and no unintended changes occur unless explicitly specified by the user. This can be further automated to detect configuration drift and automatically correct it, ensuring the desired state is met at all times.
  • Easy rollbacks. As all configurations are version controlled and state is managed, users can easily and safely roll back most infrastructure configurations without complicated reconfigurations.
  • Integration with CI/CD. While IaC can be integrated into any pipeline, Terraform’s simple three-step workflow fits easily into any CI/CD pipeline and helps to completely automate infrastructure management.

(Learn how to set up a CI/CD pipeline.)

How to use Terraform

You can start using Terraform by simply installing it in your local environment. Terraform supports Windows, Linux, and macOS, and it can be installed manually from a pre-compiled binary or via a package manager like Homebrew on macOS, Chocolatey on Windows, or Apt/Yum on Linux. This gives users the flexibility to install Terraform in their environments and integrate it into their workflows.

HashiCorp also provides a managed solution called Terraform Cloud. It gives users a platform to manage infrastructure on all supported providers without the hassle of installing or managing Terraform itself. Terraform Cloud includes features like:

  • Remote encrypted state storage
  • Direct CI/CD integrations
  • Fully remote and SOC2 compliant collaborative environment
  • Version control integration
  • Private registry to store modules, and Policy as Code support to configure security and compliance policies
  • Fully auditable environment
  • Cost estimation before applying infrastructure changes in supported providers

Additionally, Terraform Cloud is deeply integrated with other HashiCorp Cloud Platform services like Vault, Consul, and Packer to manage secrets, provide a service mesh, and create images. All of this allows users to manage their entire infrastructure using the HashiCorp platform.

Using Terraform to provision resources

Finally, let’s look at a simple Terraform configuration. Assume you want to deploy a web server instance in your AWS environment. It can be done by creating an HCL configuration similar to the following.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}

# Specify the provider
provider "aws" {
  region = var.region

  # AWS credentials (hardcoded here only for illustration)
  access_key = "xxxxxxxxxxxxx"
  secret_key = "yyyyyyyyyyyyy"

  default_tags {
    tags = {
      Env            = "web-server"
      Resource_Group = "ec2-instances"
    }
  }
}

# Configure the security group
resource "aws_security_group" "web_server_access" {
  name        = "server-access-control-sg"
  description = "Allow Access to the Server"
  vpc_id      = local.ftp_vpc_id

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "server-access-control-sg"
  }
}

# Get the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Elastic IP
resource "aws_eip" "web_server_eip" {
  instance = aws_instance.web_server.id
  vpc      = true

  tags = {
    Name     = "web-server-eip"
    Imported = false
  }
}

# Web server instance
resource "aws_instance" "web_server" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t3a.small"
  availability_zone           = "eu-central-1a"
  subnet_id                   = "subnet-yyyyyy"
  associate_public_ip_address = false
  # Attach the security group created above (this argument expects a list)
  vpc_security_group_ids      = [aws_security_group.web_server_access.id]
  key_name                    = "frankfurt-test-servers-common"
  disable_api_termination     = true
  monitoring                  = true

  credit_specification {
    cpu_credits = "standard"
  }

  root_block_device {
    volume_size = 30
  }

  tags = {
    Name = "web-server"
  }
}

In the HCL file, we point to the AWS provider and supply the AWS credentials (access key and secret key) that Terraform uses to communicate with AWS and provision resources. (Credentials are hardcoded here only for simplicity; in practice, environment variables or shared credential files are safer options.)

We have created a security group, an Elastic IP, and an EC2 instance with the necessary configuration options to reach the desired state, all defined in the configuration itself. Additionally, the AMI used for the EC2 instance is queried within the configuration by looking up the latest Ubuntu image. HCL’s easily understandable syntax lets users define their desired configurations and execute them via Terraform. You can take an in-depth look at all the available options for the AWS provider in the Terraform documentation.

Terraform summary

Terraform is a powerful IaC tool that aims to strike the best balance between user-friendliness and features. Its declarative, platform-agnostic nature allows this tool to be used in any supported environment without vendor lock-in or having to learn new platform-specific tools.

Furthermore, users have the flexibility to select the implementation suited to their needs, from running Terraform locally to using the managed Terraform Cloud solution. All of this has made Terraform one of the leading IaC tools.

NodeJS vs Python: When & How To Use Both

NodeJS and Python are two of the most popular technologies for application development. Python is one of the most widely adopted programming languages, facilitating development in many areas. NodeJS, on the other hand, is a runtime environment.

Both are excellent for their intended purposes with overlapping use cases. In this post, we will dig into Python and NodeJS to understand the similarities and differences between the two technologies.

What is Python?

Python is an open-source, high-level, dynamic programming language. Python is a general-purpose language, meaning that it’s not specialized for a specific area or task. It can be used for almost any development purpose, from building websites and software to automation, data analytics, and machine learning.

This flexibility and user-friendliness have made Python one of the leading programming languages.

Advantages of Python

  • Versatility. As a general-purpose language, Python can be used to accommodate a wide variety of programming needs, from simple scripting to machine learning.
  • Ease of use. Python is one of the simpler languages to learn, with a low barrier to entry while offering all its powerful capabilities.
  • Ecosystem. Python has thousands of libraries and frameworks to facilitate any kind of functionality. Thus, you can easily find packages to extend the functionality of Python. The best part is that all these libraries and frameworks can be easily installed via the Python package manager called pip.
  • Extensibility. Python can be easily integrated with other languages such as C, C++, and Java. It helps to utilize the functionality of Python within programs developed using other languages.
  • Cross-platform support. Programs can be run on any operating system, including Windows, Linux, and macOS.
  • GUI support. Unlike some other languages, Python has multiple fully developed GUI frameworks like Tkinter and Pygame to create GUI applications (a minimal example follows below).
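
As a quick illustration of the GUI support mentioned above, here is a minimal Tkinter window. Tkinter ships with the standard CPython installer, so this runs without any extra packages.

import tkinter as tk

# Create the main window
root = tk.Tk()
root.title("Hello from Python")

# Add a label and a button that closes the window
tk.Label(root, text="Tkinter ships with Python's standard library").pack(padx=20, pady=10)
tk.Button(root, text="Quit", command=root.destroy).pack(pady=(0, 15))

# Start the event loop
root.mainloop()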

What is NodeJS?

NodeJS is a single-threaded, open-source JavaScript runtime environment that enables developers to build scalable server-side applications. Node is built on Google’s open-source V8 JavaScript engine and written in C, C++, and JavaScript.

The main difference between NodeJS and Python is that Python is a fully fledged programming language, while Node is a runtime environment designed to run JavaScript outside the browser.

Advantages of NodeJS

  • Simplicity. Since NodeJS uses the popular JavaScript language as the base, developers can easily use it in their applications and use JavaScript for both client-side and server-side developments.
  • Scalability. The single-threaded, non-blocking nature of NodeJS makes it easy to scale applications, enabling them to handle a large number of simultaneous connections with high throughput.
  • Ecosystem. NPM offers thousands of packages to extend the functionality of NodeJS.
  • Speed & efficiency. NodeJS can run relatively faster than other tools and runtimes, as its core is developed in C and C++. Coupled with the scalability of its runtime, this increases the speed of NodeJS applications further.
  • Multi-platform. Node has cross-platform support allowing users to develop web, desktop, and mobile applications.

Comparing NodeJS vs Python

Now that we understand the basics of Python and NodeJS, let’s compare them to identify the intricacies of this programming language and runtime environment.

Use cases

The first thing to compare is the use cases. While both NodeJS and Python are excellent back-end technologies, their typical use cases differ:

Node is ideal for scalable application development, especially when dealing with real-time data and event-driven architectures. The features and speed of Node have made its runtime an excellent choice to power REST APIs, IoT, single-page applications, data streaming, and more. Additionally, NodeJS can also be used to create desktop and mobile applications with tools like Electron and Ionic.

As a general-purpose language, Python can be used for virtually any kind of development, from desktop applications to web applications using frameworks like Flask, Django, and Pyramid. As a scripting language, Python can add functionality to software developed in other programming languages and power automation scripts.

Additionally, Python has gained immense popularity in data science as one of the leading languages for data analytics, machine learning, neural networks, and artificial intelligence projects. Even though mobile development support is one area where Python lags, frameworks like Kivy and BeeWare can be used for mobile development.

Importantly, however, these Python mobile frameworks lack features and tools when compared to options such as React Native and Flutter.

Architecture

Good architecture is vital for any software application or tool to function properly and efficiently. Architecture defines the underlying behavior, the components, and the relationships between those components.

NodeJS is based on a single-threaded event loop model to handle multiple client requests simultaneously. Its architecture is designed to reduce resource usage, leading to relatively lightweight processes with fast executions. The non-blocking nature of NodeJS also allows handling multiple concurrent connections.

Python converts its code into bytecode, which an interpreter then executes as machine code. This approach leads to slower execution times compared to compiled languages. However, newer interpreters like PyPy can increase the speed of Python as an alternative to the default CPython.

Python also does not support true multi-threading: the Global Interpreter Lock (GIL) in the underlying CPython interpreter prevents true multi-core execution via threads. However, this does not limit the functionality of Python, as libraries like asyncio can be used to build asynchronous applications.
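
To illustrate, here is a minimal asyncio sketch: the two simulated I/O tasks run concurrently on a single thread, which is how Python works around the lack of true multi-threaded execution for I/O-bound workloads.

import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a network or disk call
    return f"{name} finished"

async def main() -> None:
    # Both coroutines wait concurrently, so this takes ~1s, not ~2s
    results = await asyncio.gather(fetch("task-a", 1), fetch("task-b", 1))
    print(results)

asyncio.run(main())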

Performance

Speed, scalability, and efficiency are key parameters when considering the overall performance of any tool or service. A faster and more efficient platform will lead to more stable and responsive applications.

NodeJS executes its code outside of the constraints of the browser, allowing it to be faster and more resource-efficient. The non-blocking nature of its architecture increases the speed further.

Node applications can easily scale up or down depending on the application architecture and requirements. Moreover, NodeJS can easily facilitate scalable architectures with fast execution times as well as lightweight communication between each process.

As an interpreted language, Python is slower than NodeJS. Since Python does not support multi-threading natively, the scalability of Python applications can be limited compared to NodeJS; the interpreter cannot execute multiple CPU-bound tasks simultaneously. However, implementations like PyPy, a newer interpreter, increase speed, and features like Stackless Python integrate thread-based programming into Python.

Extensibility

The ability to extend functionality beyond the core capabilities is crucial when deciding on a development tool. Extensibility without impacting existing features or functions, and an extensive ecosystem, are key pillars of that ability. Both NodeJS and Python have excellent extensibility options.

NodeJS can be easily extended and integrated with various packages and tools. Node package manager (NPM) provides developers access to thousands of packages to add new capabilities to an application. NPM has the largest open-source package library with over a million packages.

NodeJS also provides an inbuilt API for developing HTTP and DNS servers. Furthermore, frameworks like React, Vue, and Angular allow developers to create web applications easily.

Python also has an extensive package library that allows developers to add new functionality via its pip repositories. It features an extensive list of frameworks, from web development to data analytics and machine learning. Here, the extensibility of Python plays a key role, as it can be easily integrated with other programming languages.

A good example of this is using Python bindings to call functions and pass data between Python and languages like C and C++. This lets developers take advantage of the strengths of both languages and provides a good way to overcome the relative slowness of Python.

Ease of use

With straightforward syntax and programming structure, both technologies are easy to learn, particularly compared to other languages like Java, C++, and C#. However, Python has the edge here: it is much more readable than NodeJS, and its beginner-friendliness makes it easier to learn and get started with.

NodeJS vs Python: Comparison summary

In short: NodeJS offers superior speed and scalability for event-driven, real-time workloads, while Python leads in readability, versatility, and its data science ecosystem.

What to Choose for Your Development?

Both NodeJS and Python are excellent tools for their targeted development use-cases. NodeJS will be ideal if you want a unified runtime environment to create cross-platform applications for web, mobile, and desktop.

However, this does not mean Python cannot be used for these types of development; it is a popular choice for powering many back-end services. Moreover, Python has a clear edge over NodeJS when it comes to other requirements like automation scripting, data analytics, and machine learning. It is also the go-to language for many DevOps and data science projects.

SDK vs API: What’s The Difference?

Software Development Kits (SDKs) and Application Programming Interfaces (APIs) are two indispensable tools in modern software development. Both aim to enhance and extend the capabilities of software. However, SDKs and APIs have different use cases, and each targets a specific concern in the software development pipeline.

In this article, we will demystify SDKs and APIs to understand what they are, how they can be used, and how they relate to each other.


What is a Software Development Kit?

An SDK is a set of tools, libraries, and programs that can be used to develop applications for a specific platform or service.

Unlike general programming languages that allow users to develop software for any supported platform, SDKs enable users to develop platform-specific software utilizing all the features and functionality of those platforms. Most platforms offer separate SDKs for different programming languages, so developers can work in the language of their choice.

Typically, an SDK includes the following resources:

  • Code libraries provide functions that allow users to interact with the underlying platform and utilize its capabilities.
  • A compiler translates the programming language, converting source code to object code.
  • Debuggers help identify platform- or service-specific issues.
  • Documentation includes usage instructions, how-to guides, best practices, and code samples to aid in your development.

What an SDK brings is simplicity. Developers can simply download and install an SDK and start developing for the specified platform from their integrated development environment. APIs can also be part of an SDK, providing interfacing functionality.
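
As a concrete example, the sketch below uses boto3, the AWS SDK for Python, to list S3 buckets. It assumes the boto3 package is installed and that AWS credentials are already configured in the environment.

import boto3

# The SDK wraps authentication, request signing, and the AWS API for us
s3 = boto3.client("s3")

# One SDK call replaces hand-written HTTP requests to the S3 API
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])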

Common SDKs: Examples & usage

  • Google’s Android SDK and Apple’s iOS SDK for mobile application development on the Android and iOS platforms.
  • Microsoft’s .NET SDK and the macOS SDK for desktop application development on Microsoft Windows and Apple macOS.
  • The Amazon Web Services, Microsoft Azure, and Google Cloud SDKs for developing on their respective cloud platforms.
  • The OpenAI SDK, Qualcomm Neural Processing SDK for AI, and Vertex AI for developing AI applications.
  • The Stripe and PayPal SDKs to integrate payment services into your applications.

Benefits of software development kits

  • Direct access to the functionality and features of the SDK platform and the ability to use them within your application.
  • Straightforward integration and faster development. The SDK provides all the required integration and development libraries.
  • No reinventing the wheel. SDK eliminates the need for developers to create functions from scratch to interact with a platform as they can simply call upon the SDK for that, drastically reducing the development time.
  • Efficient resourcing and cost. This shorter and faster development cycle leads to efficient resource management and reduced development costs.
  • Developer’s choice. Developers can use their preferred language for the development as most platforms provide SDKs for different programming languages.
  • Great support. SDKs come with documentation and code samples, allowing developers to easily learn how to use them. Additionally, most common SDKs have wide communities, so answers to development questions are usually a simple search query away.

What is an Application Programming Interface?

An Application Programming Interface (API) is an interface that facilitates communication between two platforms.

The primary goal of an API is to standardize the way third parties interact with a piece of software—without using custom connectors or integrations. An API allows external parties to utilize or integrate the services provided by the software in their applications.

The Application Programming Interface can dictate what kind of functionality is exposed and what information can be exchanged between the underlying software or service and the API user (the third party who consumes the API). Typically, an API consists of two components:

  • API. The interface that facilitates communication and data sharing.
  • Documentation. Information on how to utilize the API, endpoints, information on authentication and authorization, data exposed, etc.

There are different kinds of APIs depending on the use case and functionality. Some common types of API architecture are:

  • Web API. These are interfaces used to interact with standard web components like web browsers, devices, or custom services.
  • REST API. Utilize the REST architectural style to facilitate the API and can be used with XML, JSON, and plain text. These APIs are also known as RESTful APIs. They have become the common choice for most web applications and rely on HTTP/S for communications.
  • SOAP API. This highly extensible standard uses XML to provide messaging services and facilitate communication. While SOAP can be more complex than REST, it provides heightened data security, privacy, and transport-independent API implementation.
  • RPC. These types of APIs are used to invoke actions or processes and get the desired outcome. Typically, these APIs will accept an API call with some parameters, carry out an action, and return the result. RPC can use JSON or XML, and they are also known as JSON-RPC or XML-RPC within development communities.

How APIs work: basic functionality

The goal of any API is to facilitate communication between two different platforms or services.

Assume you have an online booking platform for hotels. You need to know the availability of rooms with different hotel providers using this platform. As bookings happen in real-time across multiple platforms, you must have an updated inventory of available rooms at all times to facilitate a smooth booking process for your end-users.

This is where APIs come into play. Each hotel provider exposes an API that reports the availability of its properties. Your booking platform can query these API endpoints when an end user requests a booking at a particular property. The API then returns the availability and any additional information requested by the booking platform. The platform then uses the API again to confirm the booking and inform the hotel provider, so that they can update their internal systems.
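
A hedged sketch of what such a query could look like from the booking platform’s side, using Python’s requests library; the endpoint, parameters, and token here are entirely hypothetical.

import requests

# Hypothetical availability endpoint of one hotel provider
response = requests.get(
    "https://api.example-hotel.com/v1/availability",
    params={"property_id": "1234", "check_in": "2022-03-01", "nights": 2},
    headers={"Authorization": "Bearer <api-token>"},
    timeout=10,
)
response.raise_for_status()  # surface HTTP errors early

# The provider returns structured data the booking platform can display
print(response.json())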

The workflow of an API

Let’s break it down:

  1. A client application requests a specific API endpoint to carry out a specific functionality to obtain information.
  2. The API captures this request. Then it checks for authentication and authorization and relays the request with any parameters to the internal platform.
  3. Depending on the request type, the platform will process the request and communicate the result back to the client application which sent the request. For example, if it is a data request, the platform will send back the requested data, or if the request is to carry out a specific function, it will return the result of that specific function.


Common API use cases

  • Mapping and weather APIs add customized maps and weather predictions to software applications or websites. Examples include Google Maps API, OpenStreetMap API, and OpenWeatherMap API.
  • Payment APIs to facilitate payments across multiple providers. Examples: Stripe, Square, PayPal, KeyPay, and Bank APIs.
  • Scientific Services like Open Science Framework API to access open-source research projects and data.
  • Internal APIs facilitate communication between different components of an application. This is especially the case with distributed architectures like microservices, where smaller individual components within an application use APIs to facilitate communication between them.

(Understand how APIs & microservices work together.)

Benefits of Application Programming Interfaces

  • Fast integration. Provide fast integration between different software and services.
  • Secure without customization. Securely exposing platform capabilities and data without requiring custom integrations.
  • Enables distributed architectures. The ability to facilitate distributed software architectures where internal communications are handled via APIs.
  • Increased productivity in the development cycles with high reusability offered by APIs.
  • Reduced costs as no custom development efforts are required to interact with API endpoints.
  • Easy analytics. Easy integration of reporting and data analytics functionality through data queried via APIs.
  • Platform agnostic. Any type of service, device, or platform can utilize APIs as they are platform-agnostic.

Choosing between SDKs & APIs

There is no need to select only one of SDKs and APIs for your development. In fact, both are essential in modern applications to facilitate core services through your application. As mentioned:

  • An SDK provides a complete development kit for software development for building applications for a specified platform, service, or language.
  • An API is used to facilitate communication between two platforms.

Developers can create their applications using an SDK and use APIs to integrate with third-party platforms or services to bring additional functionality to the application. SDKs themselves will include APIs that facilitate interactions with the targeted platform.

Additionally, Software Development Kits can be used to create APIs that enable external parties to interface with your application.

SDKs & APIs are integral to software development

SDKs and APIs have become integral parts of modern software development. With the ever-increasing complexity of development requirements, SDKs and APIs aim to simplify the development process by offering the necessary tools to develop applications while utilizing the capabilities of the targeted platforms and services.

In this cloud-first world, both these tools have become invaluable for developing applications across different platforms and integrating with various services.

How To Run Self-Hosted Azure DevOps Build/Release Agents

Microsoft Azure—Azure for short—is the Microsoft cloud services platform spanning IaaS, PaaS, and SaaS services from simple virtualized infrastructure to data warehousing, ML, and AI platforms.

The Azure DevOps service is one such SaaS offering, providing a fully featured DevOps platform consisting of:

  • Azure Boards (Planning and Management of the Project)
  • Azure Pipelines (CI/CD Pipeline)
  • Azure Repos (Cloud-hosted private Git Repositories)
  • Azure Test Plans (Manual and Exploratory testing tools)
  • Azure Artifacts (Artifact Storage)

These platforms are augmented by a vast collection of extensions that integrate third-party tools and platforms and extend the functionality. The CI/CD pipeline, provided by the Azure Pipelines service, is one of the core components powering a software development process. Azure Pipelines offers the option of Microsoft-hosted or self-hosted agents to run CI/CD jobs.

In this article, we will look at how to configure self-hosted agents to be utilized in an Azure pipeline.

(New to Azure DevOps? Start with our beginner’s guide.)

Why do we need self-hosted agents?

While it may seem a bit strange to utilize a self-hosted agent with a cloud-based service, there are some significant benefits of opting to go with a self-hosted agent.

One reason is cost. Microsoft does offer:

  • One free Microsoft-hosted job with 1,800 minutes
  • One self-hosted job with unlimited minutes

Though that may be sufficient for small-scale development, most users will inevitably need more flexibility to run multiple concurrent builds and releases. At the time of writing, a Microsoft-hosted agent costs $40 USD per agent while a self-hosted agent costs only $15, both with unlimited minutes. Thus, the self-hosted option provides cost savings when you need to scale up, even with the added management overhead.

The second reason for self-hosting is customizability, which gives you the freedom to run the agent on any supported operating system, including Windows, Linux, and macOS. Microsoft-hosted agents let users select a specific image type, but they are limited to what is available from Microsoft.

Additionally, agents can be configured as containers for further flexibility and can even run multiple agents on a single host to maximize resource usage.

Running your self-hosted agent

Setting up and running a self-hosted agent is a relatively simple process, with the primary requirement being running the correct agent for the specified operating system and underlying architecture. In this section, we will see how to run agents on a Windows and a Linux VM.

Creating a Personal Access Token (PAT)

The first step before setting up an agent is to create a personal access token (PAT), which will be used to connect the agent to Azure Pipelines.

Step 1. Log in to your Azure DevOps organization, open user settings, and select “Personal access tokens”


Step 2. In the Personal Access Tokens screen, click on “New Token” to create a token.


Step 3. Provide a name, expiration date, and the necessary permissions and click on Create to create the PAT.


Note: Ensure that all the correct permissions are granted. Otherwise, you will not be able to initialize the connection. If required, you can configure the agent to have Full access to Azure DevOps.


Step 4. Once the token is generated, store it securely, as it will not be accessible later.


Installing & configuring the agents

Now that we have created the token, we can move on to setting up the agent. Agent configuration instructions can be obtained via the Agent pools section under Pipelines in the Organization Settings of the Azure DevOps dashboard.

Obtaining Agent Configuration Instructions

Step 1. Navigate to the Organization Settings and select Agent pools from the Pipeline section.


Step 2. Select the Default agent pool. (If needed, add the agent to another available pool, or create a new pool and add the agent there.)


Step 3. Click on the New Agent option to obtain the agent installation instructions.


Step 4. Select the desired operating system and system architecture and follow the instructions provided.


Windows Installation

Let’s see how to install the agent in Windows 10 on X64 architecture. Please refer to Microsoft’s official Windows agent guide for a complete list of prerequisites and specifications.

Step 1. Download the agent. (It will be downloaded as a zip file.)

Invoke-WebRequest -Uri https://vstsagentpackage.azureedge.net/agent/2.195.1/vsts-agent-win-x64-2.195.1.zip -OutFile vsts-agent-win-x64-2.195.1.zip


Step 2. Extract the downloaded agent to the desired destination. It is recommended that the agent is extracted to a folder named agents in the root of the C drive (C:\agents).

# Create directory and navigate to the directory
New-Item -Path "C:\" -Name "agents" -ItemType "directory"
Set-Location -Path "C:\agents"

# Extract the downloaded zip file
Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("C:\vsts-agent-win-x64-2.195.1.zip", "$PWD")

# Verify the extraction
Get-ChildItem


Step 3. Start the agent configuration by running the following command. (An elevated PowerShell prompt is recommended.)

.\config.cmd


You will be required to enter configuration details such as:

  • Server URL (Azure organizational URL)
  • Authentication type (Here, we have used the previously created authentication token)
  • Agent details, including agent-pool and agent name

Finally, specify whether to configure the agent as a Windows service.

You will be able to see the configured Azure Pipelines Agent if you navigate to the Services section on Windows (services.msc).


Step 4. Navigate back to Agent pools in the Organization Settings, and you can see the newly configured agent in the Default pool under the Agents tab.


Linux Installation

Installing and configuring the pipeline agent in Linux is similar to Windows. So, in this section, let’s see how to install the agent in an Ubuntu environment. Full configuration details are available in the Microsoft documentation.

Step 1. Download the agent

wget https://vstsagentpackage.azureedge.net/agent/2.195.1/vsts-agent-linux-x64-2.195.1.tar.gz


Step 2. Create a folder and extract the downloaded tar.gz file.

# Create directory and navigate to the directory
mkdir agent
cd agent

# Extract the downloaded zip file
tar zxf ~/Downloads/vsts-agent-linux-x64-2.195.1.tar.gz

# Verify the extraction
ls


Step 3. Start the agent configuration by running the following command.

./config.sh


Similar to the Windows configuration, you will be asked to enter the server details, the authentication type, and the authentication token we created earlier. Then configure the agent details; finally, start the agent by running the run.sh script.

Step 4 (Optional). You can configure the agent to run as a system service using the svc.sh script located in the agent directory. Specify the user and use the install command to configure the service.

sudo ./svc.sh install ubuntu
sudo ./svc.sh start


Step 5. Navigate back to the Agent pools in the Organizational settings and then to the Default pool of the Agents tab to verify that the new Ubuntu agent is added as a self-hosted agent.


Running your self-hosted agent in Docker

Running the agent as a container is another available option. Both Windows and Linux are supported as container hosts.

In the following section, let’s look at how to create a container image with the Azure pipeline agent and spin up the image as a container. We will be utilizing the Docker Desktop in a Windows environment to create a Linux (Ubuntu) based agent container.

Step 1. Create a folder named dockeragent and then create a Dockerfile within the folder with ubuntu:18.04 as the base image with the required configurations. (The configuration is available via Microsoft documentation.)

FROM ubuntu:18.04

# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    jq \
    git \
    iputils-ping \
    libcurl4 \
    libicu60 \
    libunwind8 \
    netcat \
    libssl1.0 \
  && rm -rf /var/lib/apt/lists/*

RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
  && rm -rf /var/lib/apt/lists/*

ARG TARGETARCH=amd64
ARG AGENT_VERSION=2.194.0

WORKDIR /azp
RUN if [ "$TARGETARCH" = "amd64" ]; then \
      AZP_AGENTPACKAGE_URL=https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz; \
    else \
      AZP_AGENTPACKAGE_URL=https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-${TARGETARCH}-${AGENT_VERSION}.tar.gz; \
    fi; \
    curl -LsS "$AZP_AGENTPACKAGE_URL" | tar -xz

COPY ./start.sh .
RUN chmod +x start.sh

ENTRYPOINT [ "./start.sh" ]

Step 2. Create the startup script (start.sh) and put it in the same folder. Ensure that the file uses Unix-style (LF) line endings.

#!/bin/bash
set -e

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi

  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."

    # If the agent has some running jobs, the configuration removal process will fail.
    # So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token $(cat "$AZP_TOKEN_FILE") && break

      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

source ./env.sh

print_header "1. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "2. Running Azure Pipelines agent..."

trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run.sh "$@" & wait $!

Step 3. Build the Image by running the following command in the dockeragent folder.

docker build -t dockeragent:latest .


Step 4. Create a container using the docker run command with the newly created Docker image. We can pass environment variables when creating the container; in this instance, we pass the server URL (AZP_URL), the PAT token (AZP_TOKEN), and the agent name (AZP_AGENT_NAME) as variables, replacing the placeholder values below with our own.

docker run -e AZP_URL=https://dev.azure.com/<organization> -e AZP_TOKEN=<pat-token> -e AZP_AGENT_NAME=docker-agent-01 dockeragent:latest


Step 5. We can verify that the container has been added as an agent by looking at the Default agent pool in the Azure DevOps dashboard.


Self-hosted agents for Azure DevOps

Self-hosted agents in Azure DevOps Pipelines offer cost savings and more flexibility to configure and run build and release agents in any supported environment. These pipeline agents extend the functionality of the CI/CD pipeline and can run anywhere, from bare-metal servers to VMs and even containers.

Python vs Rust: Choosing Your Language

Python is one of the most ubiquitous and popular programming languages nowadays. It can power anything from simple scripts and web applications to data analytics. On the other hand, Rust is an up-and-coming programming language quickly gaining popularity in the tech community.

Both these programming languages offer their own distinct advantages and disadvantages. In this article, let’s compare how each language stacks up against the other and find the preferable language for specific development needs.

What is Python?

Python was first introduced in 1991 by Guido van Rossum. It is a multiparadigm programming language designed to be easily extensible and help users work efficiently.

Python eliminates staples of other programming languages, like semicolons and curly brackets, and its simple syntax increases code readability while providing a pleasant programming experience. Python is considered a more beginner-friendly language due to this simplicity.

Python’s extensibility and versatility allow using it across many domains, from system administration and application development to data analytics, machine learning, and artificial intelligence development.

(See why Python is perfect for big data.)

Advantages of Python

  • Python has a relatively small learning curve compared to other languages. It provides a simpler development experience without compromising functionality, and its support for asynchronous coding helps developers handle complex coding requirements.
  • A massive collection of libraries and frameworks is available. Python has gained an impressive number of libraries and frameworks due to its maturity and popularity. As a developer, there is a high chance that you can find a library or framework for any kind of functionality.
  • Python integrates with a wide variety of software, including enterprise applications and databases. It can be easily integrated with other languages like PHP and .NET.

Disadvantages of Python

  • Python is slower compared to compiled options such as C++ and Java since it is an interpreted language.
  • While Python is easy to debug, some errors only surface at runtime.

(Read our comparisons of Python to Java & Go.)

What is Rust?

Rust is a multiparadigm, general-purpose programming language introduced by Graydon Hoare of Mozilla Research. Rust is focused on safety, stability, and performance. It is a statically typed programming language with a memory-efficient architecture and interoperability with C/C++.

Even though Rust is a newer language compared to Python, it has quickly gained popularity within the developer community and is the most loved technology, according to the 2021 StackOverflow developer survey. Rust can also be used in many different domains such as:

  • System developments
  • Web applications
  • Embedded systems
  • Blockchain
  • Game engines

Advantages of Rust

  • Rust is performance-oriented compared to other languages with its fast and memory-efficient architecture with no runtime or garbage collection.
  • Enforces strict safe memory allocations and secure coding practices.
  • Direct safe control over low-level resources. (Comparable to C/C++)

Disadvantages of Rust

  • A relatively steep learning curve compared to languages like Python; a higher degree of coding knowledge is required to use Rust efficiently.
  • Limited support for monkey patching.
  • Compilation can be slow compared to other languages.

Comparing Python vs Rust

Since we have a basic overview of Python and Rust now, let’s compare them to understand how they stack up against each other.

Ease of coding

Python is inherently designed to provide a simpler development experience. Its highly readable code structure and simple syntax provide developers with a better coding experience. Additionally, users can easily adapt Python to any need and start development quickly as it can be used for many use cases.

Meanwhile, Rust is geared more towards system programming and is better suited for specific use cases. Rust has a steeper learning curve, and properly utilizing its features takes time.

Performance

As an interpreted language, Python is slower, even with implementations like PyPy that are geared towards speed. Rust is faster and can be more than twice as fast as Python. Since Rust is compiled directly into machine code, there is no interpreter or virtual machine between the code and the hardware. Another factor that improves the performance of Rust is its memory management.

Even without a garbage collector like Python’s, Rust ensures proper memory management from the get-go by enforcing compile-time checks for memory leaks and irregular memory behavior. Overall, Rust offers performance comparable to languages such as C and C++ without their overheads.

Garbage collection

Rust gives developers the option to store data on the stack or the heap, and it determines at compile time when memory is no longer required and should be cleaned up.

This means memory is reclaimed without the program having to decide at runtime when to allocate and free it. Because it eliminates the need for a constantly running garbage collector, Rust can be integrated with other languages without adversely affecting their performance.

Python, by contrast, uses a garbage collector that checks for memory no longer in use and cleans it up while the program is running.
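
The difference is easy to observe from Python itself: reference cycles cannot be reclaimed by reference counting alone, so the cyclic garbage collector sweeps them up at runtime, as this minimal sketch shows.

import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle, then drop the only external references to it
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# The collector finds the now-unreachable cycle while the program runs;
# collect() returns the number of unreachable objects it found
print("unreachable objects:", gc.collect())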

Documentation

Both programming languages have excellent documentation from official sources as well as the community. Python’s documentation is more beginner-friendly, and even community contributions are laid out in an easily understandable manner.

Rust’s documentation is also simple to understand, yet it is geared towards more technically experienced users and can be a little complex compared to Python’s.

Extensibility

Python offers a clear advantage in terms of extensibility due to the sheer number of libraries, frameworks, software, and services that are available for Python or support Python.

While Rust is a relatively new language, its ecosystem is growing rapidly thanks to its popularity. However, it is not yet comparable to the options available for Python.

Error handling

Python and Rust handle errors in entirely different ways. When an error is encountered, Python simply throws an exception at runtime without providing any suggestions on how to fix it. Rust, on the other hand, returns a Result value that the caller must handle, and it provides recommendations that make it easy to pinpoint and fix the issue.

Rust will provide an improved development experience and a better and easier debugging experience than other compiled languages due to its features like:

  • Guaranteed memory safety
  • Reliability and consistency
  • Comparative user-friendliness

Unlike Python, Rust does not make users wait until runtime to discover many errors.
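
A minimal sketch of the Python side of this contrast: the error below only surfaces when the code actually runs, and handling it is opt-in via try/except, whereas Rust’s compiler forces the caller to deal with a returned Result.

def read_config(path: str) -> str:
    # Nothing forces the caller to anticipate failure here;
    # a missing file raises an exception at runtime
    with open(path) as f:
        return f.read()

try:
    config = read_config("missing.conf")
except FileNotFoundError as exc:
    # Handling the failure is optional and happens after the fact
    print(f"could not load config: {exc}")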

Security

Rust emphasizes security, and guaranteed memory safety is what separates Rust from other similar languages like C or C++. Rust is completely memory safe unless explicitly opted out of by the developer. According to the Secure Rust Guidelines, the compiler tracks how many variables refer to given data and enforces a set of rules to manage and secure memory at any point in a Rust program.

In contrast, Python requires developers to manage memory usage and prevent memory leaks themselves.

Community

As both languages are open-source projects, their communities are directly involved in their development and improvement. Python has a considerably larger community, being the more mature platform. You can easily find resources for any kind of Python development need, and guides or answers to most problems are a quick Google search away.

Rust also has a small yet friendly and highly active community. However, you might need to spend a bit more time finding resources for your exact needs.

Choosing between Python & Rust

Both these languages have their unique approaches to development, with each language excelling in one aspect or another. What to choose depends on the specific use case of the developer.

In general, Python:

  • Will be the easiest language to learn, if you are starting up
  • Also provides a simpler development experience

Meanwhile, Rust:

  • Can have a higher barrier to entry with its complex feature set that will be daunting to new developers.
  • May be an ideal candidate if you want to expand your skillset and learn a second language.

The versatility and extensibility of Python are still unmatched. The ability to use Python across many disciplines, from web and back-end development, DevOps scripting, and scientific computing to enterprise applications and machine learning, has contributed to the language’s immense popularity. Coupled with its user-friendliness, this versatility makes Python one of the most sought-after languages.

(Learn about Python tools for beginners.)

However, Rust will be the more attractive option if speed and safety are your primary considerations. Its memory-safe nature and speed make Rust the ideal language for tasks like system development, embedded integrations, game engine development, file systems, and VR. Rust provides a programming language with modern sensibilities while delivering speeds comparable to C/C++.

Rust is quickly gaining popularity in the wider technology community as well as with organizations like Mozilla, Dropbox, and Cloudflare. AWS has even released an SDK for Rust.

Both support powerful programming

Both these languages are powerful programming tools. However, as mentioned previously, the best language depends on the specific requirements of the user. Python will continue to be a popular language thanks to its maturity, widespread adoption, and ease of use. Yet Rust is also quickly making inroads into the developer community with its superior speed and secure architecture.

Related reading

DAS vs NAS vs SAN: Choosing the Right Storage Solution https://www.bmc.com/blogs/das-vs-nas-vs-san/ Thu, 13 Jan 2022 08:22:55 +0000 https://www.bmc.com/blogs/?p=51472

Data has become the cornerstone of any technological application, giving rise to the need to store and access data reliably and efficiently. This ever-growing need has driven innovation in storage-related technologies, producing different storage methods to power consumer and enterprise needs. In this post, we will compare the features and use cases of the following storage technologies:

  • Direct Attached Storage (DAS)
  • Network Attached Storage (NAS)
  • Storage Area Network (SAN)

In this article, we’ll look at each and help you determine which is best for your use case.

What is Direct Attached Storage (DAS)?

Direct attached storage is the simplest storage type: a storage device is directly attached to a host device. A typical example of DAS is an external storage device attached to a PC or a server. DAS devices can consist of multiple hard drives in a single enclosure without any network connectivity. The following are some of the connectivity options that can be utilized for Direct Attached Storage devices:

  • USB (Universal Serial Bus)
  • SATA (Serial Advanced Technology Attachment)
  • eSATA (External Serial Advanced Technology Attachment)
  • SAS (Serial Attached SCSI)

Direct Attached Storage

Advantages of DAS

Performance is the primary advantage of Direct Attached Storage. As the storage is directly attached to a host machine, it can provide the best performance and latency, subject only to the inherent limitations of the connectivity interface. Depending on the configuration, DAS can also be a cost-effective solution with limited maintenance requirements.

Disadvantages of DAS

DAS is not suitable for providing storage connectivity to multiple users or devices because it is directly attached and has no network connectivity. Additionally, DAS has limited expandability, constrained by the number of connections a host PC can utilize and the number of drives the DAS solution supports.

What is Network Attached Storage (NAS)?

NAS or Network Attached Storage is by far the most popular storage solution for both consumer and enterprise users. It is a file-level data storage solution that provides storage via the network. NAS devices can provide simultaneous connectivity to multiple users or PCs/servers without significant performance penalties.

Most NAS devices come as multi-device (SSD/HDD) enclosures except for a few consumer-focused NAS devices. The disks in a NAS are typically combined using a redundancy solution like a RAID array or a proprietary solution like Synology Hybrid RAID (SHR). The availability of these options depends on the NAS provider and the available feature set. Most NAS devices support typical RAID configurations from RAID1 to RAID5, while more advanced NAS devices support advanced RAID configurations such as RAID 50 and RAID 60.
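As a back-of-the-envelope illustration of those redundancy trade-offs, this short sketch computes the usable capacity of a few common RAID levels, assuming identical drives (the drive counts and sizes are made up for the example):

```python
def usable_capacity_tb(level: str, drives: int, drive_tb: float) -> float:
    """Approximate usable capacity; assumes all drives are identical."""
    if level == "RAID1":            # full mirror: one drive's worth of data
        return drive_tb
    if level == "RAID5":            # one drive's capacity lost to parity
        return (drives - 1) * drive_tb
    if level == "RAID6":            # two drives' capacity lost to parity
        return (drives - 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {level}")

for level in ("RAID1", "RAID5", "RAID6"):
    print(level, usable_capacity_tb(level, drives=4, drive_tb=4.0), "TB usable")
```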

Typical NAS solutions use networking protocols such as SMB (Server Message Block) or NFS (Network File System) to provide connectivity across a network. NAS devices come in many different sizes and configurations: companies such as Synology and QNAP provide NAS products ranging from general consumer needs to data centers, while larger solution providers like Dell and HP focus on the enterprise market.
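Because NAS exposes file-level storage over SMB or NFS, a mounted share behaves like any other directory. The following sketch copies a file to a NAS share; the mount point and file names are hypothetical and assume the share is already mounted on the host:

```python
import shutil
from pathlib import Path

NAS_SHARE = Path("/mnt/nas/backups")  # hypothetical mount point for the share
source = Path("report.pdf")           # hypothetical local file

if source.exists() and NAS_SHARE.is_dir():
    # Once mounted, writing to the NAS is an ordinary file copy
    shutil.copy2(source, NAS_SHARE / source.name)
```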

Advantages of NAS

NAS provides shared connectivity: as the storage is connected to the network rather than to individual hosts, it can be easily shared between a virtually unlimited number of hosts. NAS solutions are easily scalable, as most devices support integrating additional enclosures to expand the storage. Moreover, they feature increased data redundancy, as NAS devices natively support RAID configurations. NAS devices can be easily configured and require relatively minimal maintenance.

NAS devices have evolved from simple storage solutions to full-fledged mini servers. NAS operating systems from vendors like QNAP and Synology provide additional functionality—such as running email servers, containers, and VM solutions—directly on the NAS. They have become widely popular options in the prosumer market.

Disadvantages of NAS

NAS performance depends on the network: a slow network can negatively impact a NAS solution. Therefore, network speed and latency should be primary considerations when incorporating NAS solutions.

Furthermore, NAS devices use their own CPU, RAM, and other hardware, with limited upgrade options. Thus, it might be difficult to upgrade the underlying hardware if the device faces performance issues.

What is a Storage Area Network (SAN)?

A Storage Area Network is an enterprise and datacenter-oriented solution: a dedicated, high-performance storage network that provides block-level storage access to host devices. SANs evolved from the need to provide reliable and scalable block-level storage for enterprise and data center workloads. Before SANs, the only option to gain block-level access was to use internal storage devices connected to separate servers, which severely limited the available storage capacity or led to expensive multi-server systems.

SAN addressed this issue by allowing users to have storage as a separate device accessible via the network. This solution provides the performance of a DAS with the flexibility of the NAS yet with significantly increased configuration and maintenance complexity. SAN typically uses the following protocols to facilitate connectivity between the hosts and the SAN solution:

  • SCSI (Small Computer Systems Interface)
  • iSCSI (Internet Small Computer Systems Interface)
  • Fibre Channel

Companies like IBM, Dell EMC, Hewlett Packard Enterprise, and NetApp actively provide SAN solutions. While SAN also supports redundancy options such as RAID, it goes beyond simple RAID configurations and supports multi-array RAID, caching, and even inbuilt disaster recovery and backup options.

Storage Area Network

Advantages of SAN

Since Storage Area Networks are targeted at mission-critical enterprise workloads, they provide the best performance, reliability, and scalability of all the storage solutions discussed above. Block-level access allows enterprises to eliminate the need for local storage and facilitates network-boot environments where host devices access all their storage needs, including the operating system, from the SAN. Compared to other solutions, a SAN can provide nearly unlimited capacity and scalability, as any number of SAN devices can be added to a storage solution.

Disadvantages of SAN

The major disadvantage of a SAN is the complexity. Even basic SAN solutions can be complex to implement and maintain, requiring dedicated hardware and software solutions with optimized high-performance network architectures to provide the best possible performance.

Choosing a storage solution

Ultimately, the storage solution to choose depends on the user requirements as each solution offers distinct advantages and disadvantages.

NAS will be the ideal option for most use cases, regardless of whether you are a consumer or an enterprise user. If you need to provide reliable shared storage to multiple users, it offers the most cost-effective solution without sacrificing performance.

On the other hand, DAS is ideal for small businesses where the primary goal is to store or back up data from a single host. However, small and budget-friendly NAS solutions will provide a more robust solution even for this use case, with the ability to expand in the future.

Finally, SAN should only be considered when dealing with a data center or mission-critical enterprise applications: the complexity and cost of implementing and maintaining a SAN make it suitable only for critical workloads. SANs are ideal when dealing with thousands of users in enterprise environments with dedicated IT support teams to maintain and optimize the storage solution.

Conclusion

In this post, we have discussed the DAS, NAS, and SAN storage solutions. The choice is up to the user, as each solution excels at different use cases with different feature sets. Users or organizations can select the ideal storage solution by properly evaluating their requirements, the expected feature set, performance, and scalability, combined with CapEx and OpEx considerations.

Related reading

What's EAS? Enterprise Application Software Explained https://www.bmc.com/blogs/enterprise-application-software-defined-how-is-it-different-from-other-software/ Thu, 30 Dec 2021 00:00:48 +0000 http://www.bmc.com/blogs/?p=11012

Application software comes in many different types aimed at specific requirements, platforms, user bases, etc.  Enterprise Application Software (EAS) is one popular software type.

As the name suggests, the goal of enterprise application software is to fulfill the needs of an enterprise. This software will be large in scale, covering most aspects of the organization. It can be either:

  • A single software spread across the organizational structure
  • Multiple enterprise software applications specialized for different requirements

In this article, we will look at enterprise application software and how it differentiates from other types of software.

What is an enterprise?

Before looking at enterprise application software, let's define what an enterprise is. Literally, an enterprise is a business organization, most commonly a large-scale business venture.

The term enterprise can be used to describe any business venture, from a self-employed entrepreneur to an SME. However, an enterprise typically refers to a large-scale organization with many business functions, in both the public and private sectors. Some well-known enterprise organizations include:

  • Multinational organizations or businesses
  • Federal, state, or local government entities
  • Medium- to large-scale national companies
  • School groups and districts
  • Non-profit or charitable organizations spread across multiple areas or regions

What unifies the examples mentioned above is that employees in an enterprise setting will require access to a vast amount of information or functions to carry out their job roles. These job roles can range from sales, customer support, and IT to finance and even analytics. At the information level, this data can range from sales data, customer data, security and policy information, product specifications, communication logs, and productivity measurements to key performance indicators (KPIs) and service level agreements (SLAs).

(Compare KPIs & SLAs.)

To put it all together: an enterprise is a large organization with a relatively large employee base with varying roles conducting different functions.

What is Enterprise Application Software?

Since we now know what an enterprise is, let's dive into enterprise application software. The first thing to wrap your head around is this type of application's functional scale. As these applications aim to meet the needs of an enterprise, their functionality must cover a relatively large requirement base. In general, enterprise application software is at the heart of an enterprise, providing a mission-critical solution to the entire—or the majority of the—organization.

In simple terms, a specific piece of software that covers most if not all of the tasks inherent to an enterprise setting can be defined as an Enterprise Application Software.

Characteristics of enterprise application software

Enterprise application software can be broken down into two categories:

  • Software that visualizes, manipulates, and stores a large amount of complex data. One thing to note here is that while data warehouses or data analytics software are enterprise solutions, they do not come under the EAS umbrella and are considered separate software.
  • Software that helps in business processes, ranging from business support to automation.

EAS software belonging to both these categories can have different characteristics depending on the underlying requirements. However, we can observe the following characteristics in general.

  • The widespread nature. This software needs to power an entire organization that may be spread across different geographical locations, so it should provide functionality and performance across all of those locations. With more and more organizations powered by remote workforces, most EAS software has functionality baked in to support individual employees working remotely.
  • Scalability & robustness. This is a basic requirement of any software application. However, its importance is further emphasized in an enterprise environment as this software facilitates the mission-critical function of the organization. The software should be able to scale according to the growing business needs without compromising stability or functionality.
  • Centralized management & administration. The EAS should allow administrators to manage users, configurations, and policies across the entire organization from a single place, simplifying oversight and control.
  • Business-oriented and supports the core goal of the enterprise. This is a no-brainer: the EAS must be able to provide functionality that is critical to the enterprise and help the enterprise achieve its objectives and goals.
  • Flexibility & extensibility. With the constantly evolving global landscape, enterprise requirements can also change abruptly. In such instances, an EAS should be flexible enough to quickly adapt to a changing workflow with minimal modification and without hindering the overall business process. Additionally, as an enterprise typically utilizes multiple software services and platforms, an EAS must be able to interact with these services using an API, plugins, extensions, etc. (a brief illustrative sketch of such an API integration follows this list).
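As a brief illustration of that extensibility, the sketch below calls a REST API of a hypothetical EAS product using the third-party requests library; the URL, endpoint, field names, and token are all made up for the example:

```python
import requests  # third-party: pip install requests

BASE_URL = "https://eas.example.com/api/v1"        # hypothetical EAS endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical credential

# Pull open support tickets from the EAS so another system can consume them
response = requests.get(
    f"{BASE_URL}/tickets",
    headers=HEADERS,
    params={"status": "open"},
    timeout=10,
)
response.raise_for_status()
for ticket in response.json():
    print(ticket["id"], ticket["title"])
```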

Types of Enterprise Application Software

No single software application can facilitate all the needs of an organization. In most cases, there are specialized EAS applications suited to different organizational requirements. Common examples include the following:

  • Human Resource Management Systems
  • Payroll Management Systems
  • Customer Support and Customer Relationship Management (CRM)
  • Email Systems
  • Marketing and Sales Management Systems
  • Incident Management Systems
  • Enterprise Resource Planning (ERP)
  • Project and Portfolio management
  • Supply-chain-management Software (SCMS)
  • Office Suites

All the above software targets different requirements of an enterprise, and most of the time a typical enterprise will rely on multiple systems to cover all of them. This is where the extensibility discussed above comes into play: an EAS with a larger array of connectivity options, including to other platforms, offers enterprises more freedom to choose and match different EAS to supplement their needs without being vendor-locked.

How enterprise application software differs from other software

In the previous sections, we had a look at what an EAS is and the different types of EAS available. So, what exactly makes this EAS different from other types of software? There are two types of software:

  • System software. The software that is responsible for the core functionality of the system and provides the interface between the underlying hardware resources and application software. Operating Systems such as Windows, Linux, macOS, Android, and iOS come under the system software category.
  • Application software. Application software sits on top of the system software and provides different functionality to users. This software can range from a simple email client or a web browser to more complex applications such as games, CAD and video editing software, AI and ML software, and software to build software. EAS comes under the application software umbrella.

While typical software such as web browsers and document editors is designed to be used by single individuals, it is also used by enterprises. However, such software is not considered part of the EAS umbrella. Other than scale, what differentiates EAS is that it is designed to be used by many individuals across the organization while providing specific functionality targeted at specific business needs.

EAS & cloud services

The popularity of cloud services and increased reliance on cloud-based managed platforms have changed how most organizations approach Enterprise Application Software. Previously, the common practice was to purchase or internally build an EAS, host the application in an on-premise environment, and manage all aspects of the software, from hardware to updates manually.

With software as a service (SaaS), the cloud can now provide most organizations with a simpler solution to fulfill their EAS needs. SaaS solutions are available for organizations regardless of the type of software needed. Services like Zendesk for CRM, Microsoft Dynamics 365, SAP ERP, and Salesforce provide comprehensive EAS solutions that can be easily customized to support any workflow of an enterprise.

As these services are delivered as managed solutions, enterprises can free themselves from managing the underlying software and hardware resources, remaining responsible only for the configuration. On top of that, solutions like Microsoft Dynamics support on-premises deployments that enable enterprises to facilitate hybrid environments where sensitive data resides within the enterprise-managed system. This feature allows enterprises to leverage the advantages of both cloud-based and on-premises deployments.

Implementing a cloud-first EAS solution will be ideal for many organizations moving forward, with many other services like data warehouses, endpoint security, email, and IT also available as cloud services. The primary obstacles to a cloud-first approach for EAS were security and compliance requirements. However, services like dedicated servers and tenancy, isolated environments, geographically separated data services, SD-WAN, and stricter compliance and security enforcement have paved the way for EAS to benefit from all the advantages of the cloud without compromising privacy or security.

Selecting the right EAS solution

Enterprise Application Software has become a core component of a successful enterprise. However, selecting the right EAS solution can be a daunting process with a myriad of EAS solutions available for different enterprise requirements.

SaaS offers enterprises more freedom when it comes to selecting the ideal EAS solution that meets their specific requirements without incurring significant upfront investments.

BMC supports enterprise applications

BMC is a software company that has been supporting enterprise organizations for over 40 years. With solutions for service and operations management, workload automation, and the mainframe, practically any part of your organization can benefit from BMC solutions. Explore BMC Helix, Control-M, and our BMC Automated Mainframe Intelligence (AMI) portfolios.

Related reading

What Is CI/CD? Continuous Integration & Continuous Delivery Explained https://www.bmc.com/blogs/what-is-ci-cd/ Thu, 30 Dec 2021 00:00:31 +0000 https://www.bmc.com/blogs/?p=13621

Flexibility, speed, and quality are the core pillars of modern software development. Increased customer demand and the evolving technological landscape have made software development more complex than ever, making traditional software development lifecycle (SDLC) methods unable to cope with the rapidly changing nature of developments.

Practices like Agile and DevOps have gained popularity in facilitating these changing requirements by bringing flexibility and speed to the development process without sacrificing the overall quality of the end product.

Together, Continuous Integration (CI) and Continuous Delivery (CD) are a key practice in this regard. CI/CD allows users to build integrated development pipelines that span from development to production deployments across the software development process. So, what exactly are Continuous Integration and Continuous Delivery? Let's take a look.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is CI/CD?

CI/CD refers to Continuous Integration and Continuous Delivery. In its simplest form, CI/CD introduces automation and monitoring to the complete SDLC.

  • Continuous Integration can be considered the first part of a software delivery pipeline where application code is integrated, built, and tested.
  • Continuous Delivery is the second stage of a delivery pipeline where the application is deployed to its production environment to be utilized by the end-users.

Let’s deep dive into CI and CD in the following sections.

What is Continuous Integration?

Modern software development is a team effort, with multiple developers working on different areas, features, or bug fixes of a product. All these code changes need to be combined to release a single end product. However, manually integrating all these changes can be a near-impossible task, and with multiple developers working on overlapping changes, conflicting code changes are inevitable.

Continuous Integration offers the ideal solution to this issue by allowing developers to continuously push their code to the version control system (VCS). These changes are validated, and new builds are created from the new code and put through automated testing.

This testing typically includes unit and integration tests to ensure that the changes do not cause any issues in the application. It also ensures that all code changes are properly validated and tested, and that immediate feedback is provided to the developer from the pipeline in the event of an issue, enabling them to fix it quickly.

This not only increases the quality of the code but also provides a platform to quickly identify code errors with a shorter automated feedback cycle. Another benefit of Continuous Integration is that it ensures all developers have the latest codebase to work on, as code changes are quickly merged, further mitigating merge conflicts.

The end goal of the continuous integration process is to create a deployable artifact.

What is Continuous Delivery?

Once a deployable artifact is created, the next stage of the software development process is to deploy this artifact to the production environment. Continuous delivery comes into play to address this need by automating the entire delivery process.

Continuous Delivery is responsible for the application deployment as well as infrastructure and configuration changes, plus monitoring and maintaining the application. CD can extend its functionality to include operational responsibilities, such as infrastructure management, using automation tools such as:

  • Terraform
  • Ansible
  • Chef
  • Puppet

Continuous Delivery also supports multi-stage deployments, where artifacts are moved through stages like staging and pre-production and finally to production, with additional testing and verification at each stage. This additional testing and verification further increases the reliability and robustness of the application.

Why we need CI/CD

CI/CD is the backbone of all modern software development, allowing organizations to develop and deploy software quickly and efficiently. It offers a unified platform to integrate all aspects of the SDLC, including separate tools and platforms from source control and testing tools to infrastructure modification and monitoring tools.

A properly configured CI/CD pipeline allows organizations to adapt easily to changing consumer needs and technological innovations. In a traditional development strategy, fulfilling changes requested by clients or adopting new technology is a long-winded process; moreover, the consumer need may have shifted by the time the organization adapts to the change. Approaches like DevOps with CI/CD solve this issue, as CI/CD pipelines are much more flexible.

For example, suppose there is a consumer requirement that is not currently addressed. With a DevOps approach, it can be quickly identified, analyzed, developed, and deployed to the software product in a relatively short amount of time without disrupting the normal development flow of the application.

Another aspect is that CI/CD enables quick deployment of even small changes to the end product, quickly addressing user needs. It not only resolves user needs but also provides visibility of the development process to the end-user. End-users can see that the product grows with frequent deployments related to bug fixes or new features.

This is in stark contrast with traditional approaches like the waterfall model, where the end-users only see the final product after the complete development is done.

CI/CD today

CI/CD has come a long way since its inception, where it began only as a platform to support application delivery. Now it has evolved to support other aspects, such as:

  • Database DevOps, where database changes are continuously delivered.
  • GitOps, where infrastructure is defined in a declarative version-controlled manner to be managed via CI/CD pipelines.

Thus, users can integrate almost all aspects of the software delivery into Continuous Integration and Continuous Delivery. Furthermore, CI/CD can also extend itself to DevSecOps, where security testing such as vulnerability scans, configuration policy enforcements, network monitoring, etc., can be directly integrated into CI/CD pipelines.

CI/CD pipeline & workflows

CI/CD pipeline is a software delivery process created through Continuous Integration and Continuous Delivery platforms. The complexity and the stages of the CI/CD pipeline vary depending on the development requirements.

Properly setting up a CI/CD pipeline is the key to benefiting from all the advantages offered by CI/CD. One pipeline might have a multi-stage deployment strategy that delivers software as containers to a multi-cloud Kubernetes cluster, while another may be a simple pipeline that builds, tests, and deploys the application as a serverless function.

A typical CI/CD pipeline can be broken down into the following stages:

  1. Development. This stage is where the development happens, and the code is merged to a version control repository and validated.
  2. Build. The application is built using the validated code, and this artifact is used for testing.
  3. Testing. Usually, the built artifact is deployed to a test environment, and extensive tests are carried out to ensure the functionality of the application.
  4. Deploy. This is the final stage of the pipeline, where the tested application is deployed to the production environment.

All the above stages are continuously monitored for errors, and the relevant parties are quickly notified of any issue.
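To make the four stages concrete, here is a deliberately simplified Python sketch that runs each stage as a command and halts on the first failure, much like a CI/CD server would; the commands are placeholders, not a real pipeline definition:

```python
import subprocess
import sys

# Placeholder commands: substitute your project's real build/test/deploy steps
STAGES = [
    ("build",  ["python", "-m", "compileall", "src"]),
    ("test",   ["python", "-m", "pytest", "tests"]),
    ("deploy", ["echo", "deploying artifact to production..."]),
]

for name, command in STAGES:
    print(f"--- stage: {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        # Fail fast and surface the error, mirroring pipeline monitoring
        sys.exit(f"Stage '{name}' failed; stopping the pipeline.")
```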

Advantages of Continuous Integration & Delivery

CI/CD undoubtedly increases the speed and efficiency of the software development process while providing a top-down view of all the tasks involved in the delivery process. On top of that, CI/CD has the following benefits reaching all aspects of the organization.

  • Improve developer and QA productivity by introducing automated validations, builds, and testing
  • Save time and resources by automating mundane and repeatable tasks
  • Improve overall code quality
  • Increase the feedback cycles with each stage and the process in the pipeline being continuously monitored
  • Reduce the bugs or defects in the system
  • Provide the ability to support other areas of application delivery, such as database and infrastructure changes directly through the pipeline
  • Support varying architectures and platforms from traditional server-based deployment to container and serverless architectures
  • Ensure the application’s reliability, thanks to the ability to monitor the application in the production environment with continuous monitoring

CI/CD tools & platforms

When it comes to CI/CD tools and platforms, there are many choices ranging from simple CI/CD platforms to specialized tools that support a specific architecture. There are even tools and services directly available through source control systems. Let’s look at some of the popular CI/CD tools and platforms.

Continuous Integration tools & platforms

  • Jenkins
  • TeamCity
  • Travis CI
  • Bamboo
  • CircleCI

Continuous Delivery tools & platforms

  • ArgoCD
  • JenkinsX
  • FluxCD
  • GoCD
  • Spinnaker
  • Octopus Deploy

Cloud-Based CI/CD

  • Azure DevOps
  • Google Cloud Build
  • AWS CodeBuild/CodeCommit/CodeDeploy
  • GitHub Actions
  • GitLab Pipelines
  • Bitbucket Pipelines

Summing up CI/CD

Continuous Integration and Continuous Delivery have become an integral part of most software development lifecycles. With continuous development, testing, and deployment, CI/CD has enabled faster, more flexible development without increasing the workload of development, quality assurance, or the operations teams.

Today, CI/CD has evolved to support all aspects of the delivery pipelines, thus also facilitating new paradigms such as GitOps, Database DevOps, DevSecOps, etc.—and we can expect more to come.

BMC supports Enterprise DevOps

From legacy systems to cloud software, BMC supports DevOps across the entire enterprise. Learn more about Enterprise DevOps.

Related reading


Test Automation Frameworks: The Ultimate Guide https://www.bmc.com/blogs/test-automation-frameworks/ Fri, 10 Dec 2021 01:00:02 +0000 http://www.bmc.com/blogs/?p=12115

Quality assurance (QA) is a major part of any software development. Software testing is the path to a bug-free, performance-oriented software application—one that also satisfies (or exceeds!) end-user requirements.

Of course, manual testing quickly becomes unscalable due to the rapid pace of development and ever-increasing requirements. Thus, a faster yet accurate testing solution was required, and automated testing became the ideal solution for this need. Automated testing does not mean replacing the entire manual testing process. Instead, automated testing means:

  1. Allowing users to automate most routine and repetitive test cases.
  2. Freeing up valuable time and resources to focus on more intricate or complex test scenarios.

Introducing automated testing to a delivery pipeline can be a daunting process. Several factors—the programming language, user preferences, test cases, and the overall testing scope—directly decide what can and cannot be automated. However, if set up correctly, automated testing can be the backbone of the QA team to ensure a smooth and scalable testing experience.

Different types of automation frameworks came into prominence to aid in this endeavor. An automation framework allows users to easily set up an automated test environment that ultimately helps in providing a better ROI for both development and QA teams. In this article, we will have a look at different types of test automation frameworks available and their advantages and disadvantages.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

What is a test automation framework?

Before diving into different types of test automation frameworks, we need to understand what an automation framework is. Test automation is the process of automating repetitive and predictable testing scenarios.

A test automation framework is a set of guidelines or rules that can be used to define test cases. These test cases can then be configured and implemented, using test automation tools such as Selenium, Puppeteer, etc., into the delivery process via a CI/CD pipeline.

A test automation framework will consist of practices and tools designed to create efficient test cases. These practices range from coding standards and test-data handling methods to object repository management and access control for test environments and external tools. However, testers have more freedom than this. Testers are:

  • Not confined to these rules or guidelines
  • Free to create test cases in their preferred way

Still, a framework provides standardization across the testing process, leading to a more efficient, secure, and compliant testing process.

Advantages of a test automation framework

There are some key advantages of adhering to the rules and guidelines offered by a test automation framework. These advantages include:

  • Increased speed and efficiency of the overall testing process
  • Improved accuracy and repeatability of the test cases
  • Lower maintenance requirements with standardized practices and processes
  • Reduced manual intervention and human error
  • Maximized test coverage across all areas of the application, from the GUI to internal application logic


Popular test automation frameworks

When it comes to test automation frameworks, there are six leading frameworks available these days. In this section, we will look at each of these six frameworks with regard to their architecture, advantages, and disadvantages:

  • Linear automation framework
  • Modular-driven framework
  • Library architecture framework
  • Data-driven framework
  • Keyword-driven framework
  • Hybrid testing framework

Linear Automation Framework

The linear framework, or record-and-playback framework, is best suited for basic, introductory-level testing.

In a linear automation framework, users target a specific program functionality, create test scripts in sequential order, and run them individually. This process includes capturing all the test actions, like navigation and inputs, and playing them back repeatedly to conduct the test.

Advantages of Linear Framework

  • Does not require specific automation knowledge or custom code
  • It is easier to understand test cases due to sequential order
  • Faster approach to testing
  • Simpler implementation into existing workflows, as most automation tools provide inbuilt record-and-playback functionality

Disadvantages of Linear Framework

  • Test cases are not reusable as they are targeted towards specific use cases or functions
  • Static, hardcoded test data means there is no option to run tests with different data sets
  • Maintenance can be complex as any change will require rebuilding test cases

Modular Driven Framework

This framework takes a modular approach to testing, breaking tests down into separate units, functions, or modules that are tested in isolation. These separate test scripts can be combined to build larger tests covering the complete application or specific functionality.

(Learn about unit testing, function testing, and more.)

Advantages of Modular Framework

  • Increased flexibility of test cases. Individual sections can be quickly edited and modified as tests are separated
  • Increased reusability as individual test cases can be modified from different overarching modules to suit different needs
  • The ability to scale up testing quickly and efficiently to include any new functionality

Disadvantages of Modular Framework

  • Can be complex to implement and require proper programming knowledge to build and set up test cases
  • Cannot be used with different test data sets in a single test case

Library Architecture Framework

This framework is derived from the modular framework and aims to provide a greater level of modularity by breaking down tests by units, functions, etc.

The library architecture framework identifies similar tasks within test scripts and groups them by function. Rather than being organized strictly around application functions, these modular parts are focused on common objectives. The functions are stored in a library, sorted by their objectives, and test scripts call upon this library to obtain different functionality when testing.
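A minimal sketch of the idea, using the Selenium bindings mentioned earlier: a shared library module groups steps around one objective (authentication), and individual test scripts import it instead of repeating the steps. The page URL and element IDs are hypothetical:

```python
# login_library.py - steps grouped around a common objective: authentication
from selenium.webdriver.common.by import By

def login(driver, username: str, password: str) -> None:
    """Reused by any test script that needs an authenticated session."""
    driver.get("https://app.example.com/login")            # hypothetical URL
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

# test_checkout.py - a test script calling into the shared library:
#   from login_library import login
#   login(driver, "qa-user", "s3cret")
```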

Advantages of Library Architecture Framework

  • A high level of modularity leads to increased scalability of test cases
  • Increased reusability as libraries can be used across different test scripts
  • Can be a cost-effective solution due to its reusability, especially in larger projects

Disadvantages of Library Architecture Framework

  • Can be complex to set up and integrate into delivery pipelines
  • Technical expertise is required to identify and modularize the common tasks
  • Test data are static as they are hardcoded in script with any changes requiring direct changes to the scripts

Data-Driven Framework

The main feature of the data-driven framework is that it decouples data from the script logic. It is the ideal framework when users need to test a function or scenario with different data sets but still use the same internal logic.

In data-driven frameworks, values such as inputs and outputs are passed as parameters to test scripts from external data sources such as variable files, databases, etc.
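For instance, pytest's parametrize marker gives a minimal data-driven setup: the test logic is written once, and the framework feeds it each data row (the function and data are invented for the example; in practice the rows often come from a CSV file or database):

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Logic under test."""
    return round(price * (1 - percent / 100), 2)

# Data decoupled from the test logic
DISCOUNT_CASES = [
    (100.00, 10, 90.00),
    (59.99, 0, 59.99),
    (20.00, 50, 10.00),
]

@pytest.mark.parametrize("price,percent,expected", DISCOUNT_CASES)
def test_apply_discount(price, percent, expected):
    # The same test body runs once per data row
    assert apply_discount(price, percent) == expected
```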

Advantages of Data-Driven Framework

  • Decoupled approach to data and logic leads to increased reusability of test cases while providing the ability to test under different data sets without modifying the test case
  • Handle multiple scenarios using the same test scripts with varying sets of data, which leads to faster testing
  • Since there is no need to hardcode data, scripts can be changed without affecting the overall functionality
  • Easily adaptable to suit any testing need

Disadvantages of Data-Driven Framework

  • One of the most complex frameworks to implement, as decoupling data and logic requires expert knowledge of both automation and the application itself
  • Can be time-consuming and a resource-intensive process to implement in the delivery pipeline

Keyword-Driven Framework

The keyword-driven framework takes the decoupling of data and logic introduced in the data-driven framework a step further. In addition to the data being stored externally, specific keywords associated with different actions used to test the GUI are also stored externally, to be referenced at test execution.

This makes keywords independent entities that reference specific functions or actions associated with specific objects. Users write test steps that invoke the necessary keyword-based actions, and when a keyword is referenced, the appropriate script is executed within the test.
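A toy sketch of the dispatch mechanism: keywords map to action functions, and externally stored test steps (shown inline here for brevity, with hypothetical values) are executed by looking each keyword up in the table:

```python
# Action functions that the keywords refer to
def open_page(url: str) -> None:
    print(f"opening {url}")

def click(element_id: str) -> None:
    print(f"clicking {element_id}")

# The keyword table: independent entities mapped to specific actions
KEYWORDS = {"OpenPage": open_page, "Click": click}

# Test steps would normally live in an external file or spreadsheet
steps = [
    ("OpenPage", "https://app.example.com/login"),  # hypothetical URL
    ("Click", "submit"),                            # hypothetical element
]

for keyword, argument in steps:
    KEYWORDS[keyword](argument)  # dispatch on the keyword at execution time
```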

Advantages of Keyword-Driven Framework

  • Test scripts can be built independently of the application
  • Increased reusability and flexibility while providing a detailed approach to categorize test functionality
  • Reduced maintenance requirements compared to non-decoupled frameworks

Disadvantages of Keyword-Driven Framework

  • One of the most complex frameworks to configure and implement, requiring a considerable investment of resources
  • Keywords need to be scaled according to the application testing needs, which can lead to increased complexity with each test scope or requirement change

Hybrid Testing Framework

A hybrid testing framework is not a predefined framework with its own architecture or rules, but a combination of the previously mentioned frameworks.

Relying on a single framework is not feasible given the ever-increasing need to cater to different test scenarios. Therefore, in most development environments, different types of frameworks are combined to best suit the application's testing needs while leveraging the strengths of each framework and mitigating their disadvantages.

With the popularity of DevOps and agile practices, more flexible frameworks are needed to cope with the changing environments. Therefore, a hybrid approach provides the best solution by allowing users to mix and match frameworks to obtain the best results for their specific testing requirements.

Customizing your frameworks

Selecting a test automation framework is the first step towards creating an automated testing environment. However, relying on a single framework has become a near-impossible task due to the ever-evolving nature of the technological landscape and rapid development cycles. That’s why the hybrid testing framework has gained popularity—for enabling users to combine different test automation frameworks to build an ideal automation framework for their needs.

Even if you are new to the automation world, you can start with a framework with many built-in solutions, build on top of it and customize it to create the ideal framework.

Related reading

DBMS: Database Management Systems Explained https://www.bmc.com/blogs/dbms-database-management-systems/ Thu, 09 Dec 2021 01:00:24 +0000 https://www.bmc.com/blogs/?p=12564

Data is the cornerstone of any modern software application, and databases are the most common way to store and manage data used by applications.

With the explosion of web and cloud technologies, databases have evolved from traditional relational databases to more advanced types of databases such as NoSQL, columnar, key-value, hierarchical, and distributed databases. Each type has the ability to handle structured, semi-structured, and even unstructured data.

On top of that, databases are continuously handling mission-critical and sensitive data. When this is coupled with compliance requirements and the distributed nature of most data sets, managing databases has become highly complex. As a result, organizations require robust, secure, and user-friendly tools to maintain these databases.

This is where database management systems come into play—by offering a platform to manage databases. Let’s take a look.

What is a database management system?

A database management system (DBMS) is a software tool that enables users to manage a database easily. It allows users to access and interact with the underlying data in the database. These actions can range from simply querying data to defining database schemas that fundamentally affect the database structure.

Furthermore, a DBMS allows users to interact with a database securely and concurrently, without users interfering with one another, while maintaining data integrity.



Database tasks in a DBMS

The typical database administrative tasks that can be performed using a DBMS include:

  • Configuring authentication and authorization. Easily configure user accounts, define access policies, modify restrictions, and access scopes. These operations allow administrators to limit access to underlying data, control user actions, and manage users in databases.
  • Providing data backups and snapshots. A DBMS can simplify the backup process of databases by providing a simple, straightforward interface to manage backups and snapshots. It can even move these backups to third-party locations such as cloud storage for safekeeping (see the sketch after this list).
  • Performance tuning. A DBMS can monitor the performance of databases using integrated tools and enables users to tune databases by creating optimized indexes that reduce I/O usage, getting the best performance from the database.
  • Data recovery. In a recovery operation, DBMS provides a recovery platform with the necessary tools to fully or partially restore databases to their previous state—effortlessly.
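As a small illustration of a DBMS-provided backup interface, Python's built-in sqlite3 module exposes SQLite's online backup API; the file names here are arbitrary:

```python
import sqlite3

source = sqlite3.connect("app.db")         # existing database (created if absent)
backup = sqlite3.connect("app-backup.db")  # destination for the snapshot

with backup:
    source.backup(backup)  # SQLite's built-in online backup (Python 3.7+)

source.close()
backup.close()
```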

All these administrative tasks are facilitated through a single management interface. Most modern DBMS support handling multiple database workloads from centralized DBMS software, even in a distributed database scenario. Furthermore, they allow organizations to have a governable top-down view of all the data, users, groups, locations, etc., in an organized manner.

(Explore the role of DBAs, or database administrators.)

DBMS system schematic

The following diagram illustrates the schematic of a DBMS system:

DBMS system schematic

Components of a database management system

All DBMSs come with various integrated components and tools necessary to carry out almost all database management tasks. Some DBMS software even provides the ability to extend beyond the core functionality by integrating with third-party tools and services, directly or via plugins.

In this section, we will look at the common components that are universal across all DBMS software, including:

  • Storage engine
  • Query language
  • Query processor
  • Optimization engine
  • Metadata catalog
  • Log manager
  • Reporting and monitoring tools
  • Data utilities

DBMS Components

Storage engine

The storage engine is the core component of the DBMS that interacts with the file system at an OS level to store data. All SQL queries which interact with the underlying data go through the storage engine.

Query language

A database access language is required for interacting with a database, from creating databases to simply inserting or retrieving data. A proper DBMS must support one or multiple query languages and language dialects. Structured query language (SQL) and MongoDB Query Language (MQL) are two query languages that are used to interact with the databases.

In many query languages, functionality can be further categorized according to specific tasks (a brief SQLite-based sketch follows this list):

  • Data Definition Language (DDL). This consists of commands that can be used to define database schemas or modify the structure of database objects.
  • Data Manipulation Language (DML). Commands that directly deal with the data in the database. All CRUD operations come under DML.
  • Data Control Language (DCL). This deals with the permissions and other access controls of the database.
  • Transaction Control Language (TCL). Commands that deal with internal database transactions.
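The sketch below illustrates the first two categories with SQLite via Python's standard sqlite3 module (the table and data are invented; note that SQLite has no DCL, and its TCL is limited to transaction statements such as BEGIN and COMMIT):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database

# DDL: define the schema
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# DML: manipulate the data (the create and read parts of CRUD)
con.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
for row in con.execute("SELECT id, name FROM users"):
    print(row)  # (1, 'alice')

con.close()
```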

Query processor

This is the intermediary between user queries and the database. The query processor interprets users' queries and translates them into actionable commands that the database can understand to perform the appropriate functionality.

Optimization engine

The optimization engine allows the DBMS to provide insights into database performance, both for optimizing the database itself and for tuning queries. When coupled with database monitoring tools, it provides a powerful toolset for getting the best performance out of the database.
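For a concrete, if modest, example, SQLite's EXPLAIN QUERY PLAN statement shows the kind of insight an optimization engine surfaces, here confirming that a query will use an index (the schema and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
con.execute("CREATE INDEX idx_customer ON orders(customer)")

# Ask the engine how it intends to execute the query
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("acme",)
)
for row in plan:
    print(row)  # the plan should mention: SEARCH ... USING INDEX idx_customer

con.close()
```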

Metadata catalog

This is the centralized catalog of all the objects within the database. When an object is created, the DBMS keeps a record of that object with some metadata about it using the metadata catalog. Then, this record can be used to:

  • Verify user requests to the appropriate database objects
  • Provide an overview of the complete database structure

Log manager

This component will keep all the logs of the DBMS. These logs will consist of user logins and activity, database functions, backups and restore functions, etc. The log manager ensures all these logs are properly recorded and easily accessible.

(Compare logs to monitoring.)

Reporting & monitoring tools

Reporting and monitoring tools are another standard component that comes with a DBMS. Reporting tools will enable users to generate reports while monitoring tools enable monitoring the databases for resource consumption, user activity, etc.

Data utilities

In addition to all the above, most DBMS software comes with additional inbuilt utilities to provide functionality such as:

  • Data integrity checks
  • Backup and restore
  • Simple database repair
  • Data validations
  • Etc.


Types of database management systems

There are many different types of DBMS, yet we can categorize the most commonly used DBMS into three types.

Relational database management systems (RDBMS)

This is the most common type of DBMS, used to interact with databases that contain structured data in a table format with predefined relationships. RDBMS use structured query language (SQL) to interact with databases. Microsoft SQL Server, MySQL, and Oracle Database are some popular DBMSs in this category.

Document database management systems (DoDBMS)

DoDBMS are used to manage databases that contain data stored in JSON-like structures with limited or no relationship structure. They are powered by query languages such as the MongoDB Query Language (MQL) for database operations. MongoDB and Azure Cosmos DB are some prominent examples of DoDBMS.

Columnar database management systems (CDBMS)

As the name suggests, this type of DBMS is used to manage columnar databases that store data in columns instead of rows, emphasizing high performance. Some databases that use columnar format are Apache Cassandra, Apache HBase, etc.

Advantages of a DBMS

DBMSs were introduced to solve the fundamental issues associated with storing, managing, accessing, securing, and auditing data in traditional file systems. Software users and organizations can gain the following benefits by using a DBMS:

Increased data security

DBMS provides the ability to control users and enforce policies for security and compliance management. This controlled user access increases the database security and makes the data less vulnerable to security breaches.

Simple data sharing

DBMS enables users to access the database securely regardless of their location. Thus, they can handle any database-related task promptly without the need for complex access methods or worrying about database security. On top of that, DBMS allows multiple users to collaborate effectively when interacting with the database.

Data integration

DBMS allows users to gain a centralized view of databases spread across multiple locations and manage them using a single interface rather than operating them as separate entities.

Abstraction & independence

DBMS enables users to change the physical schema of a database without changing the logical schema that governs database relationships. As a result, organizations can scale the underlying database infrastructure without affecting the database operations.

Furthermore, any change to the logical schema can also be carried out without affecting applications that access the databases.

Streamlined backup & recovery mechanism

Most databases have built-in backup and recovery tools. Yet a DBMS offers centralized tools that make backup and recovery functionality more convenient, providing a better user experience. Securing data has become easier than ever with functionality like:

  • Automated snapshots
  • Backup scheduling
  • Backup verifications
  • Multiple recovery methods

Uniform management & monitoring

DBMS provides a single interface to carry out all the management and monitoring tasks, thus simplifying the workload of database administrators. These tasks can range from database creation and schema modifications to reporting and auditing.

DBMSs are essential

A DBMS is an essential component for any organization when it comes to managing databases. The scale, complexity, and feature set of a DBMS will depend on the specific DBMS and the requirements of the organization.

With different DBMS providing different feature sets, it is paramount that organizations rigorously evaluate the DBMS software before committing to a single system. However, a properly configured DBMS will greatly simplify the management and maintenance of databases at any scale.

Related reading
