What Is Terraform? Terraform & Its IaC Role Explained https://s7280.pcdn.co/terraform/ Tue, 29 Mar 2022 13:36:59 +0000

Managing infrastructure is a core requirement for most modern applications. Even in PaaS or serverless environments, there will still be components that require user intervention for customization and management. With the ever-increasing complexity of software applications, more and more infrastructure modifications are required to facilitate the functionality of the software.

Manual infrastructure management cannot keep up with rapid development cycles. It creates bottlenecks that lead to delays in the delivery process.

Infrastructure as Code (IaC) has become the solution to this issue, allowing users to align infrastructure changes with development. It also facilitates faster, automated, repeatable changes by codifying all infrastructure and configuration and managing them through the delivery pipeline.

Terraform is one of the leading platform-agnostic IaC tools, allowing users to define and manage infrastructure as code. In this article, let’s dig into what Terraform is and how we can use it to manage infrastructure at scale.

What is Infrastructure as Code?

Before moving into Terraform, we need to understand Infrastructure as Code. To put it simply, IaC enables users to codify their infrastructure. It allows users to:

  • Create repeatable version-controlled configurations
  • Integrate them as a part of the CI/CD pipeline
  • Automate the infrastructure management

If an infrastructure change is needed in a more traditional delivery pipeline, the infrastructure team has to be informed, and the delivery pipeline cannot proceed until the change is made to the environment. Such an inflexible manual process hinders the overall efficiency of the SDLC, especially when practices like DevOps call for fast yet flexible delivery pipelines.

IaC allows infrastructure changes to be managed through a source control mechanism like Git and integrated as an automated part of the CI/CD pipeline. It not only automates infrastructure changes but also facilitates auditable changes and easy rollbacks of changes if needed.

What is Terraform?

Terraform is an open-source infrastructure as code (IaC) tool from HashiCorp. It allows users to define both on-premises and cloud resources in human-readable configuration files that can be easily versioned, reused, and shared. Terraform can manage both low-level components (like compute, storage, and networking resources) and high-level resources (DNS, PaaS, and SaaS components).

Terraform is a declarative tool, further simplifying the user experience by allowing users to specify the expected state of resources without specifying the exact steps needed to reach it. Terraform works out how the infrastructure needs to be modified to achieve the desired result.
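As a minimal illustrative sketch of this declarative style (the bucket name and tags here are hypothetical, not from a real deployment), the user describes only the end state, and Terraform decides whether anything needs to be created, updated, or left alone:

```hcl
# Declarative: this block states WHAT should exist, not HOW to create it.
# On each apply, Terraform compares this desired state to reality and
# performs only the API calls needed to close the gap.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # hypothetical bucket name

  tags = {
    Env = "demo"
  }
}
```

Running the same configuration twice is safe: if the bucket already matches the description, Terraform makes no changes.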

Terraform is a platform-agnostic tool, meaning it can be used across any supported provider. It accomplishes this by interacting with the APIs of cloud providers: when a configuration is applied, Terraform communicates with the target platform via its API and ensures the defined changes are carried out there. With more than 1,700 providers from HashiCorp and the Terraform community available in the Terraform Registry, users can configure resources from leading cloud providers like Azure, AWS, GCP, and Oracle Cloud to more domain-specific platforms like Cloudflare, Dynatrace, Elastic Stack, Datadog, and Kubernetes.

The Terraform workflow

The Terraform workflow is one of the simplest in the IaC space, consisting of only three steps to manage any type of infrastructure. It also gives users the flexibility to adapt the workflow to their exact implementation needs.

Terraform Workflow

1. Write

The first stage of the workflow is where users create the configurations to define or modify the underlying resources. This can range from provisioning a simple compute instance in a cloud provider to deploying a multi-cloud Kubernetes cluster. Writing configurations can be done either in the HashiCorp Configuration Language (HCL), the default language for defining resources, or with the Cloud Development Kit for Terraform (CDKTF), which allows users to define resources in common programming languages like Python, C#, Go, and TypeScript.

2. Plan

This is the second stage of the workflow, where Terraform looks at the configuration files and creates an execution plan. It enables users to see the exact changes that will happen to the underlying infrastructure: which resources will be created, modified, or destroyed.

3. Apply

This is the final stage of the workflow, which takes place once the user has reviewed the plan and confirmed the changes. Terraform then carries out the changes to achieve the desired state in a specific order, respecting all resource dependencies. This happens regardless of whether you have defined dependencies in the configuration: Terraform automatically identifies the resource dependencies of the platform and executes the changes without causing issues.

Terraform uses state to keep track of all changes to the infrastructure and to detect configuration drift. It creates a state file on the initial execution and updates it with each subsequent change. This state file can be stored locally or in a remote backend such as an Amazon S3 bucket. Terraform always references this state file to identify the resources it manages and to track changes to the infrastructure.
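A sketch of such a remote backend configuration, storing state in an S3 bucket, might look like the following (the bucket name, key path, and region are hypothetical placeholders):

```hcl
terraform {
  backend "s3" {
    # Remote state gives every team member a single source of truth
    # instead of a state file on one person's machine.
    bucket = "my-terraform-state"    # hypothetical bucket name
    key    = "prod/terraform.tfstate"
    region = "eu-central-1"
  }
}
```

After adding or changing a backend block, the configuration has to be re-initialized so Terraform can migrate the existing state to the new location.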

Benefits of Terraform

Let’s look at why so many people appreciate Terraform:

  • Declarative nature. A declarative tool allows users to specify the end state, and the IaC tool automatically carries out the steps necessary to achieve that configuration. This is in contrast to imperative IaC tools, where users must define the exact steps required to reach the desired state.
  • Platform agnostic. Most IaC tools, like AWS CloudFormation and Azure Resource Manager templates, are platform-specific. Terraform allows users to use a single tool to manage infrastructure across platforms, even when applications span many tools, platforms, and multi-cloud architectures.
  • Reusable configurations. Terraform encourages reusable configurations, where users can use the same configuration to provision multiple environments. Additionally, Terraform allows creating reusable components within configuration files with modules.
  • Managed state. With state files keeping track of all changes in the environment, all modifications are recorded, and no unnecessary changes occur unless explicitly specified by the user. This can be further automated to detect configuration drift and correct it automatically, ensuring the desired state is met at all times.
  • Easy rollbacks. As all configurations are version-controlled and the state is managed, users can easily and safely roll back most infrastructure configurations without complicated reconfigurations.
  • Integration with CI/CD. While IaC can be integrated into any pipeline, Terraform's simple three-step workflow is especially easy to integrate into any CI/CD pipeline, helping to completely automate infrastructure management.

(Learn how to set up a CI/CD pipeline.)
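The reusable-configuration idea above can be sketched with modules. In this hypothetical example (the module path and variable names are assumptions, not part of any real project), one module definition provisions two environments that differ only in their inputs:

```hcl
# Reuse a single module to provision near-identical stacks per environment.
# "./modules/web-server" is a hypothetical local module that declares
# "environment" and "instance_type" as input variables.
module "web_server_staging" {
  source        = "./modules/web-server"
  environment   = "staging"
  instance_type = "t3a.small"
}

module "web_server_prod" {
  source        = "./modules/web-server"
  environment   = "production"
  instance_type = "t3a.large"
}
```

Changing the module once updates every environment that uses it, which keeps configurations consistent as the number of environments grows.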

How to use Terraform

You can start using Terraform by simply installing it in your local environment. Terraform supports Windows, Linux, and macOS. Users can install it manually from a pre-compiled binary or use a package manager like Homebrew on macOS, Chocolatey on Windows, or APT/YUM on Linux. This flexibility lets users install Terraform in their environments and integrate it into their workflows.

HashiCorp also provides a managed solution called Terraform Cloud. It gives users a platform to manage infrastructure on all supported providers without the hassle of installing or managing Terraform itself. Terraform Cloud includes features like:

  • Remote encrypted state storage
  • Direct CI/CD integrations
  • A fully remote, SOC 2-compliant collaborative environment
  • Version control integration
  • A private registry for modules, plus Policy as Code support for configuring security and compliance policies
  • A fully auditable environment
  • Cost estimation before applying infrastructure changes, on supported providers

Additionally, Terraform Cloud integrates deeply with other HashiCorp Cloud Platform services like Vault, Consul, and Packer to manage secrets, provide a service mesh, and build machine images. All of this allows users to manage their entire infrastructure on the HashiCorp platform.

Using Terraform to provision resources

Finally, let’s look at a simple Terraform configuration. Assume you want to deploy a web server instance in your AWS environment. This can be done by creating an HCL configuration similar to the following.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}

# Specify the provider
provider "aws" {
  region = var.region
  # AWS credentials (for real deployments, prefer environment variables
  # or a shared credentials file over hardcoding keys)
  access_key = "xxxxxxxxxxxxx"
  secret_key = "yyyyyyyyyyyyy"

  default_tags {
    tags = {
      Env            = "web-server"
      Resource_Group = "ec2-instances"
    }
  }
}

# Configure the security group
resource "aws_security_group" "web_server_access" {
  name        = "server-access-control-sg"
  description = "Allow Access to the Server"
  vpc_id      = local.ftp_vpc_id # assumes a locals block defines ftp_vpc_id

  # Allow SSH
  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # Allow HTTPS
  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "server-access-control-sg"
  }
}

# Get the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Elastic IP
resource "aws_eip" "web_server_eip" {
  instance = aws_instance.web_server.id
  vpc      = true

  tags = {
    Name     = "web-server-eip"
    Imported = false
  }
}

# Web server instance
resource "aws_instance" "web_server" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t3a.small"
  availability_zone           = "eu-central-1a"
  subnet_id                   = "subnet-yyyyyy"
  associate_public_ip_address = false
  # vpc_security_group_ids expects a list of security group IDs
  vpc_security_group_ids      = ["sg-xxxxxxx"]
  key_name                    = "frankfurt-test-servers-common"
  disable_api_termination     = true
  monitoring                  = true

  credit_specification {
    cpu_credits = "standard"
  }

  root_block_device {
    volume_size = 30
  }

  tags = {
    Name = "web-server"
  }
}

In the HCL file, we point to the AWS provider and supply the AWS credentials (access key and secret key) that Terraform will use to communicate with AWS and provision resources.

We have created a security group, an Elastic IP, and an EC2 instance, with the necessary configuration options to reach the desired state defined in the configuration itself. Additionally, the AMI used for the EC2 instance is queried within the configuration by looking up the latest Ubuntu image. HCL's easily understandable syntax allows users to define their desired configurations and execute them via Terraform. You can take an in-depth look at all the available options for the AWS provider in the Terraform documentation.
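Note that the provider block above references var.region without defining it. In a complete configuration, such input variables are declared separately; a sketch of that declaration (the default value here is an assumption chosen to match the availability zone used above) looks like:

```hcl
# Input variable referenced as var.region in the provider block.
# Its value can be overridden at plan/apply time with -var or a .tfvars file.
variable "region" {
  type        = string
  description = "AWS region to deploy into"
  default     = "eu-central-1" # hypothetical default
}
```

Declaring variables this way is what makes the same configuration reusable across regions or environments without editing the resource definitions.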

Terraform summary

Terraform is a powerful IaC tool that aims to strike the best balance between user-friendliness and features. Its declarative and platform-agnostic nature allows it to be used in any supported environment without vendor lock-in or having to learn new platform-specific tools. The flexible workflow and configuration options of Terraform allow it to be run even in local environments.

Furthermore, users have the flexibility to select the exact implementation suited to their needs, from self-managed installations to the managed Terraform Cloud solution. All this has led Terraform to become one of the leading IaC tools.

Related reading

Product Owner vs Product Manager vs Scrum Master: What’s The Difference? https://www.bmc.com/blogs/product-owner-product-manager-scrum-master/ Wed, 09 Mar 2022 14:09:22 +0000

As with all project and product methodologies and frameworks, Agile and Scrum are creating new roles within organizations. Three of the more prominent roles are

  • Product Owner (PO)
  • Product Manager (PM)
  • Scrum Master

All three roles are higher-level jobs and come with an attractive salary.

People are frequently confused about the differences between Product Owner, Product Manager, and Scrum Master in an Agile environment. This article aims to answer that question.

Product Managers, Product Owners, and Scrum Masters: Complementary Roles

Product Managers, Product Owners, and Scrum Masters are separate roles on an Agile team, and in the Product Manager’s case, outside the Agile team. Each role has its own part to play and can generally be distinguished by these characteristics:

Product Manager

  • Management role focused on identifying customer needs and the business objectives that a feature or product will fulfill
  • Focuses on strategic vision, product strategy, customers, markets, and company objectives
  • Handles high-level vision and product management, including positioning, marketing, sales support, customer care, and supporting product delivery
  • Oversees the entire product lifecycle; works with the business case and product roadmap

Product Owner

  • Management role focused on building and refining the product
  • Focuses on tactics and operations; supports internal teams, especially product engineering
  • Translates high-level vision into actionable tasks, creates detailed requirements and user stories, manages the product backlog, and determines what should be built next
  • Manages sprints and participates in retrospectives

Scrum Master

  • Also referred to as Team Lead; focused on helping the Scrum team perform at their highest level
  • Focuses on facilitating team coordination, supporting project processes, protecting the team, coaching team members, and communicating with Product Owners and the organization
  • Facilitates team coordination and ensures actionable tasks are performed accordingly
  • Responsible for the team following Agile practices and processes, and for supporting project processes

We’ll go into more specific job, skill, and salary information later.

Product Managers can exist anywhere, anytime. Product Owners and Scrum Masters, however, are specific roles in the Scrum Framework.

Product Owners & Scrum Masters are specifically tied to Scrum

Scrum is an Agile Development system that:

  • Focuses on goals, small and large
  • Takes place in 1- or 2-week-long product development periods, known as sprints
  • Often uses Kanban boards to create and organize tasks

Because Scrum is a specific system, it has particular roles. The roles on the team are:

  • Product Owners
  • Scrum Masters
  • Developers

In the Agile mindset, the Scrum team is meant to be self-organized, and all team members are responsible for getting the work done. The Product Owner and Scrum Master are critical parts of developing product capability through using Scrum.

When Scrum teams do not exist, the Product Owner and Scrum Master identities fade away. Many of the tasks performed by these roles may be absorbed into overarching Product Management roles or taken on by Assistant PMs.

Product Manager: jobs, skills, salary

The scope of a Product Management role varies depending on the stage of the company, the maturity of the Product Management team, and other factors including job location. At its most mature, the PM is primarily responsible for:

  • Talking to users
  • Organizing strategic path of product
  • Creating product development timelines
  • Communicating between engineering and business teams

When a product is in its initial stages, or the team is in its infancy, the Product Management team can be found wearing many hats, participating in everything from UX design and backend engineering to budget planning, along with all the customer communications that are required.

PMs tend to make good money. In the U.S., the average annual salary for a Product Manager is about $107,000 estimated base pay, according to Glassdoor. However, Glassdoor reports an extremely wide salary range for PMs of between $52K and $276K, with possible salary ranges up to $600K.

Specific PM salaries are dependent on what industry the PM works in (tech PMs seem to average in the $130K range and up) and other factors such as company and location. Your mileage may vary, so research the average pay in the industries and companies you would like to work for.

Product Owner: jobs, skills, salary

Where a Product Manager might wear several hats, Product Owner responsibilities in Scrum are very narrow. Like a second baseman on a baseball field, the Product Owner has an extremely specific piece of land to cover and specific people to speak to.

Scrum utilizes a system of tasks and keeps score, often with the help of a product management tool like a Kanban board or even a simple Excel file. Throughout the sprint, engineers claim tasks. The role of the Product Owner is to organize and prioritize those tasks for the engineers.

  • If there are no tasks, the engineers wait around for the ball to get hit to them.
  • If the tasks aren’t prioritized, the engineers develop features that aren’t crucial to the product’s mission.

Generally, the engineers, like the computer systems humankind develops, will build whatever task is on that task list, regardless of its direct impact on the end-product. Thus, it is especially important for the PO to maintain a good list, or those engineers might end up building a whole different product than what the company claims to sell.

Product Owners must do several things to maintain the Scrum backlog. The PO’s primary responsibilities are:

  • Translate PMs’ vision to actionable tasks
  • Determine day-to-day tasks
  • Write user stories for development team
  • Prioritize work in the Scrum backlog

Like PMs, Product Owners earn a solid salary. Glassdoor’s research indicates an average U.S. salary of just under $101,000 estimated total base pay for POs. Like PMs, POs have an extremely wide salary range of between $38K and $389K on Glassdoor. Also, like PMs, your PO salary mileage may vary so be sure to compare salaries for target industries and locations.

Scrum Master: jobs, skills, salary

Per ScrumAlliance.org, a Scrum Master helps the Scrum team perform at their highest levels. They protect the team from internal and external distractions so that all project members—especially the development team—can focus on their work.

Scrum Masters facilitate team coordination and support project processes by performing the following roles:

  • Ensuring actionable tasks designated by the Product Owner are performed accordingly
  • Communicating between team members about evolving planning and requirements
  • Facilitating daily Scrum and Sprint Initiatives and other Scrum events
  • Conducting meetings
  • Managing administrative tasks
  • Eliminating external and internal project hurdles
  • Other items to help the team perform at their highest level

Scrum Masters also coach team members on delivering results. They are responsible for ensuring that team members understand, execute, and follow Agile principles, processes, and practices throughout the project.

Finally, a Scrum Master communicates with the Product Owner and others within the organization to implement the Scrum Framework effectively during the project.

Scrum Masters also earn a solid salary. Glassdoor Research specifies an average U.S. salary around $111,000 total base pay for a Scrum Master, with the most likely pay range between $27K and $537K. But like PMs and POs, this is an average that may vary significantly by industry, company, and locations.

PM, PO, and Scrum Master certifications

There are ways to set yourself apart from the crowd by getting a certification in one of these areas. These certifications indicate your specialty and experience, so you can often expect to command a higher salary.

  • The Product Manager can take one of a number of tests for certification
  • The larger umbrella of Agile Development certifications will teach both PM and PO roles and responsibilities. Among them, the PMI-ACP is the top certification that acts as a catch-all for agile development roles. You may also want to consider obtaining the Scrum Alliance Certified Product Owner (CSPO).
  • ScrumAlliance.org offers several certifications on both a Scrum Master track and on a Product Owner track. Each track offers Certified, Advanced Certified, and Certified Professional certificates.

Product Managers, Product Owners, & Scrum Masters: The outlook is good

With companies across all sectors and geographies adopting agile product development or blending it with traditional project management—such as the predictive, agile, and hybrid approaches now included with Project Management Institute (PMI) Project Management Professional certification—it’s likely that Product Managers, Product Owners and Scrum Masters will be around for a long while.

As technology continues its expansion, more people will be needed to explain innovative concepts and applications to the business. Great PMs, POs, and Scrum Masters will continue to contribute to companies’ ongoing innovation efforts in order to stay ahead of competition.

Related reading

System Administrator vs Security Administrator: What’s the Difference? https://www.bmc.com/blogs/system-administrator-vs-security-administrator-whats-the-difference/ Fri, 04 Mar 2022 00:00:56 +0000

To those who are not immersed in the complex and constantly evolving world of IT, many of the roles filled by tech experts might appear to be the same. However, as these positions become increasingly relevant for companies, it is crucial to understand the difference between jobs and why they might be needed.

In this article, we’re talking about the roles and responsibilities of system administrators and security administrators. Though the names and jobs are similar, there are distinct differences in these IT-focused administrator roles.

Terminology

System administrator is often shortened to the buzzy title of sysadmin. More formally, some companies refer to their sysadmin as a network and computer systems administrator.

A security administrator, on the other hand, can have several names, including security specialist, network security engineer, and information security analyst.

As always, the job title is less important than the specific roles and responsibilities that a company may expect from the position.

What’s a system administrator (sysadmin)?

Computer networks are crucial to business, and they require a dedicated employee or several employees to manage the day-to-day operations of the network. That’s where system administrators come in.

A system administrator—often shortened to sysadmin—is an IT professional who supports the computing environment of a company and ensures the continuous and optimal performance of its IT services and support systems. Sysadmins are essentially in charge of “keeping the lights on” for the organization, in turn limiting work disruptions.

Roles & responsibilities

As their responsibilities focus on daily network operations, system administrators are charged with a wide swath of computer work: organizing, installing, and supporting the computer systems, which can include local area networks (LANs), wide area networks (WANs), network segments, intranets, and other data communication systems.

Several metrics—like uptime, performance, resources, and security—can help a sysadmin determine that the system meets the users’ needs within the company’s budget.

Responsibilities of a system administrator may include:

  • Anticipating needs of the network and computer systems before setting them up
  • Installing network hardware and software
  • Ensuring and implementing upgrades and repairs in a timely manner
  • Maintaining network and computer system security
  • Understanding and solving problems as automated alerts occur
  • Collecting data to help evaluate and optimize performance
  • Adding and assigning users and network permissions, as determined by the organization
  • Training users in proper use of hardware and software

Peers & reporting

A system administrator likely reports to an IT department head.

Unlike some IT positions, sysadmins have a unique responsibility to communicate and problem solve with colleagues both within and beyond the IT team. Because a sysadmin solves problems for and trains all users, including non-IT employees, communication is imperative.

System administrator job skills

In terms of skills needed, sysadmins need to know a little bit of everything. Beyond formal education, strong system administrators will need to possess several vital skills, including analytical, communication, multitasking, and problem-solving skills.

System administrators must also have skills such as:

Education & requirements

Some businesses may require that a system administrator hold a BS in a computer-related field, though some companies may only require a post-secondary degree.

Specific training and certifications alongside hands-on experience can strengthen a candidate’s position, especially when he or she hasn’t earned a BS. Common training and certifications for system administrators are offered by Microsoft and Cisco, including the Microsoft Certified: Azure Administrator Associate and the Cisco Certified Network Associate (CCNA) certification.

Outlook

The Bureau of Labor Statistics (BLS) projects that the employment of system administrators will grow by 5% by 2030, a rate that is slower than the average growth rate across all national occupations.

Despite this limited growth, it is still estimated that there will be close to 25,000 job openings a year for network and computer systems administrators. It is also projected that the demand for IT workers should continue to grow as companies invest in newer, faster, and more advanced technology.

The median annual wage for network and computer systems administrators was $84,810 in May 2020.


What is a security administrator?

The information stored within computers and infrastructure is crucial to business. In turn, security is of the utmost importance—particularly today, when individuals and even sovereign nations carry out cybersecurity attacks. Security administrators are employees who test and protect the hardware, software, and data within computer networks, and ensure they remain secure.

A security administrator is the lead point person for the cybersecurity team. They are typically responsible for the entire system and ensure that it is defended as a whole. They will often install, administer, and troubleshoot an organization’s security solutions, and then make certain it is kept secure from any type of outside, or inside, threat.

Roles & responsibilities

Where a system administrator knows a lot about many sectors of IT, a security administrator specializes in the security of the computers and networks.

In general, computer security, also known as IT security or cybersecurity, includes protecting computer systems and networks from theft of and/or damage to hardware, software, or information. It also includes preventing disruption or misdirection of these services. This requires knowledge of specific security devices, like firewalls, and of technologies such as Bluetooth, Wi-Fi, and the IoT. It also requires general security measures and an ability to stay abreast of new developments in the security sector.

Specific roles and responsibilities of a security administrator may include:

  • Monitoring networks for security breaches, investigating violations as they occur
  • Developing and supporting organizational security standards, best practices, preventative measures, and disaster recovery plans
  • Conducting penetration tests (simulating cyberattacks to find vulnerabilities before others can find them)
  • Reporting on security breaches to users, as necessary, and to upper management
  • Implementing and updating software to protect information
  • Staying up to date on IT security trends and information
  • Recommending security enhancements to management and C-suite executives

Peers & reporting

Due to the necessity of network and data security, security administrators often report directly to upper management, which could be a CIO or CTO.

Security administrators frequently partner with sysadmins for implementing new changes to the network for security purposes.

Education & requirements

At a minimum, security administrators are expected to hold a BS in computer science, programming, or a similar field. Some companies prefer to hire candidates who hold an MS in computer systems or an MBA in information systems.

In addition, companies frequently prefer candidates who are certified in specific security fields. A common certificate is the Certified Information Systems Security Professional (CISSP), offered by the International Information Systems Security Certification Consortium (ISC)². The CISSP is one of the most sought-after cybersecurity certifications and it is designed to prove the candidate’s deep expertise in the field.

Other top cybersecurity certifications focus on more specific areas, such as systems auditing or penetration testing.

Security administrator job skills

Work skills are just as important as formal education for the role of a security administrator. Candidates should be detail-oriented and analytical, as security vulnerabilities are often tiny, hard-to-notice parts of the program or network. Problem-solving and communication skills are necessary, as well, especially when training or helping non-IT colleagues.

It is also important for security administrators to have:

  • Strong leadership capabilities
  • Technical expertise and experience with the ability to develop a security plan, coordinate and implement it, and monitor the IT environment
  • A dedication to a collaborative approach and mindset
  • An understanding of regulatory standards and how to ensure the business achieves compliance

Outlook

The BLS anticipates significant growth in the security administrator role, predicting employment will expand by 33% by 2030. This growth rate is much faster than the average for all occupations nationwide, which is currently 8% from 2020 to 2030.

Because our economy relies more and more on hardware, software, and information, the need to protect them grows exponentially, and demand for security analysts and administrators will remain extremely high. As cyberattacks grow in frequency and complexity, it will be crucial that these professionals come up with innovative and effective solutions.

The median annual wage for information security analysts was $103,590 in May 2020.

Security vs system administrators: Both critical

Whether you are a company looking for assistance with IT and security, or you are looking for a new role, understanding the difference between system administrators and security administrators can be an important factor in ensuring all company needs are being met. The future of these jobs is secure, and the need for strong IT professionals will only continue to grow.

Related reading

]]>
IT Governance: An Introduction https://www.bmc.com/blogs/it-governance/ Fri, 04 Mar 2022 00:00:08 +0000 https://www.bmc.com/blogs/?p=15072 Nearly all organizations are significantly dependent on technology. Even the smallest of enterprises will probably require a computer or mobile phone for communication, tracking of transactions, research, or accessing government services. For most corporate entities, their strategies are heavily linked to exploiting emerging technologies through digital transformation. According to IBM’s research, executives rank technology as […]]]>

Nearly all organizations are significantly dependent on technology. Even the smallest of enterprises will probably require a computer or mobile phone for communication, tracking of transactions, research, or accessing government services.

For most corporate entities, their strategies are heavily linked to exploiting emerging technologies through digital transformation. According to IBM’s research, executives rank technology as the top external force in 2022 that will impact their businesses in the near term, when compared with regulatory concerns and market factors. The top technologies they expect to deliver business results are:

The importance of and dependence on technology mean that organizations need to carefully weigh their investment in it, as well as the risks that result from its use, including underutilization or misuse. Decisions regarding IT spend are no longer relegated to IT practitioners; they now involve the highest levels of leadership.

That is where governance comes in—especially for entities which are heavily dependent on technology to achieve business objectives and are wary of the negative effects that could result from IT failures or misuse, such as loss of business and customers, negative reputation, and/or regulatory penalties.

Let’s take a deep dive into what IT governance is and how organizations can leverage governance to make a return on their investments in technology as well as limit its harmful impacts.

What is IT Governance?

The ISO/IEC 38500:2015 standard for the governance of IT for the organization defines IT governance as the system by which the current and future use of IT is directed and controlled.

Governance facilitates effective and prudent management of IT resources in support of long-term business success. IT governance is usually a subset of overall corporate governance, and as a result there is usually significant alignment between the two. According to COBIT, the work of IT governance can be grouped into three activities:

Governance

  • Evaluating stakeholder needs, conditions and options to determine balanced, agreed-on enterprise objectives. This would include review of past business performance, future imperatives, as well as current and future operating model and environment. Assessments such as SWOT analysis, PESTEL analysis and risk assessments are important inputs of evaluation.
  • Directing the organization through prioritization and decision making. This is usually in the form of strategies and policies, as well as establishment of controls.
  • Monitoring performance and compliance against agreed-on direction, regulations and objectives. This is usually carried out through compliance audits and performance reports.

In most organizations, corporate governance is the responsibility of the board of directors, but specific governance responsibilities may be delegated to specific structures at an appropriate level, especially for large complex entities. An IT governance body might be a subset of the board with some depth of IT knowledge, or a group of senior executives (drawn from both business and IT) directly overseeing funding, management, and usage of IT.

The ITIL 4 Direct Plan and Improve guidance provides examples of key governance roles and their responsibilities:

Governance structure and role in governance:

  • Board of directors: Responsible for their organization’s governance. Specific responsibilities include setting strategic objectives, providing the leadership to implement strategy, supervising management, and reporting to shareholders.
  • Shareholders: Responsible for appointing directors and auditors to ensure effective governance.
  • Audit committee: Responsible for supporting the board of directors by providing an independent assessment of management performance and conformance.

Good vs bad governance

Governance is a function of human behavior. So, when it comes to good vs bad governance, the outcome is tied to two things:

  • Whether the governance body does its job responsibly and effectively.
  • Whether the stakeholders (i.e., management, employees, contractors or partners) are committed to upholding the governance framework.

Where the governance body is not knowledgeable or fully committed, there is a possibility that management ends up steering IT in a direction that may later harm the organization. A case in point is the abuse of users’ personal information or the introduction of bias into machine learning by some organizations, which has resulted in severe regulatory penalties and reputational damage, translating into financial loss.

Bad IT governance can be characterized by the following signs:

  • The IT function makes all the decisions on the direction of technology without oversight or input from the rest of the business.
  • IT budget spend frequently spirals out of control, with unending or stalled projects that do not provide the expected benefits to the organization.
  • The governance body is reactive in nature, only called into action when things go wrong such as major IT system failures, negative audit findings, or regulatory issues.
  • IT objectives are not aligned with the organization’s strategic objectives.

Good IT governance takes a holistic approach, ensuring that all stakeholders are involved and committed to putting in place all the necessary elements required to build and sustain an effective governance framework. COBIT gives a list of such components including: processes, organizational structures, policies and procedures, information flows, culture and behaviors, skills, and infrastructure.

Best practices in governance

The ISO/IEC 38500:2015 standard defines six principles that are necessary for effective governance of IT in the organization:

  1. Responsibility. Everyone within the organization understands and accepts their responsibilities, both in terms of demand for and supply of IT, and has the authority to meet them.
  2. Strategy. Business strategy takes into account current and future IT capabilities, and plans for the use of IT support current and ongoing business strategy.
  3. Acquisition. All IT investments are made for valid reasons, on the basis of relevant analysis and transparent decision making, with an appropriate balance between benefits, costs, and risks to the organization.
  4. Performance. IT is fit for purpose, providing services that meet business requirements in terms of quality and service levels.
  5. Conformance. The use of IT systems complies with all applicable legislation and regulations, as well as organizational policies and practices, which should be well defined, implemented, and enforced.
  6. Human behavior. Respect for human behavior is demonstrated in IT policies, practices, and decisions, even as needs evolve among all stakeholders.

Additional principles as defined by COBIT are that the IT governance system should:

  • Satisfy stakeholder needs and generate value from the use of information and technology.
  • Be built from a number of components that can be of different types and that work together in a holistic way.
  • Be dynamic, always considering the effect of changes to any of its design factors.
  • Clearly distinguish between governance and management activities and structures.
  • Be tailored to the enterprise’s needs, using a set of design factors as parameters to customize and prioritize its components.
  • Cover the enterprise end to end, focusing on all technology and information processing the enterprise puts in place to achieve its goals, including outsourced processing.

Related reading

]]>
Transforming Experience through Intelligent Service Assurance https://www.bmc.com/blogs/transforming-experience-through-intelligent-service-assurance/ Thu, 03 Mar 2022 16:09:05 +0000 https://www.bmc.com/blogs/?p=51797 As communications service providers (CSPs) transform their business by modernizing their services and expanding their offerings, they face an already highly saturated market and ever-shrinking margins. Competitive pressure is also increasing—from traditional companies investing heavily in capital-intensive innovations like 5G and fiber to the x (FTT) to grow their market share and revenue and non-traditional […]]]>

As communications service providers (CSPs) transform their business by modernizing their services and expanding their offerings, they face an already highly saturated market and ever-shrinking margins. Competitive pressure is also increasing, both from traditional companies investing heavily in capital-intensive innovations like 5G and fiber to the x (FTTx) to grow their market share and revenue, and from non-traditional entrants, hyperscalers, and other digital-native businesses disrupting the market with innovations like private 5G networks, edge computing, and network as a service (NaaS). All told, these changes are driving CSPs to review their business models and determine which compelling new services they need to succeed.

Modernizing service assurance

Current service systems—many of which have been highly customized and still rely on manual processes—cannot adapt to fast-evolving market demands and are straining to meet the scalability and performance required for success. Newer services also rely on a mesh approach, so IT service management (ITSM) must not only be proactive in resolving issues but also capable of performing seamless switchovers to avoid performance slowdowns and outages when issues do occur.

The shift toward adopting customer- and service-centric IT operations (ITOps) is hampered by the current technology-centric model and traditional service assurance solutions. The time for a modern service assurance solution is now. Introducing BMC Helix for CSP, an intelligent, CSP-specific solution that leverages the power of the BMC Helix Platform to drive automation and customer centricity across service assurance operations.

Phil Brooks, executive consultant and CEO at ANS Digital Transformation, led the team that collaborated with BMC to help design BMC Helix for CSP. “Automation and customer centricity remain a vision and ambition for all CSPs. BMC Helix for CSP is a giant step in achieving that vision,” he shares. “The evolution of network infrastructure to become more IP-based and the advent of network functions virtualization (NFV), software-defined networking (SDN), and cloud is driving convergence of network and IT operations. BMC customers can now leverage and consolidate all their service management solutions onto the single BMC Helix Platform to support these new operating models.”

BMC Helix for CSP

Industry-leading, real-world experience collated from multiple design partners, including ANS and major CSP organizations, ensures that BMC Helix for CSP captures the very best way of working, including business-critical flexibility, hyperscaling, performance, and security. The adoption of CSP industry standards for interoperability and service modeling is also key to supporting end-to-end process automation and providing critical real-time insights.

BMC Helix for CSP focuses on four areas:

  • Service assurance
  • Service quality management
  • “Zero-touch” network operations management
  • TM Forum certification

Service assurance is the group of processes and capabilities that allow a CSP to quickly identify and resolve network and performance issues before they impact customer services, and it’s moving toward a closed-loop, lights-out model. “Traditional service assurance approaches have been resource-focused, break-fix oriented operations with limited visibility [that prioritizes] interrupted customer services. This has changed dramatically in recent years with the increased criticality of network services and customer choice,” says Brooks. “The move from manual activities to artificial intelligence and machine learning (AI/ML)-driven processes is still very much in its infancy but enabled through BMC Helix for CSP.”

With BMC Helix for CSP, you can monitor, optimize, and comprehensively operate your worldwide infrastructure seamlessly through a “zero-touch” network operations command center and single-pane-of-glass to support fully automated, closed-loop, and “headless” operations. “Zero touch is the ultimate goal of CSPs to support critical services and compete in a highly commoditized market,” adds Brooks. “Those that can achieve it will be the winners, beating the competition and fully monetizing their network infrastructure assets. BMC Helix for CSP provides the platform to achieve this goal.”

“The CSP transformation will be a journey. Metrics such as Net Promoter Score (NPS) and quality of service (QoS) that monitor improvements to the customer experience are fundamental measurements of this journey. BMC Helix for CSP is equipped to provide key analytical data to support this analysis [through interactive dashboards and service analytics that deliver valuable insights].”

According to Brooks, API certifications within BMC Helix for CSP enable the essential ecosystem integrations required for CSP market success. “Data and interoperability are at the core of the BMC Helix for CSP design philosophy,” he explains. “This has been enabled through the adoption of TM Forum industry standards to support end-to-end service assurance process automation and provide actionable insights.”

Conclusion

Optimized for complex and ever-changing worldwide network infrastructure challenges, BMC Helix for CSP provides the comprehensive, critical intelligent service assurance capabilities that CSPs require to deliver the differentiated services vital to their customers and their business. Learn more at bmc.com/csp.

]]>
What Does a System Architect Do? https://www.bmc.com/blogs/system-architect/ Wed, 02 Mar 2022 00:00:03 +0000 https://www.bmc.com/blogs/?p=12588 In the ever-evolving world of IT, having strong systems and networks is crucial. Companies that are able to create goals surrounding their systems are sure to see growth, but they can’t stop there. It is necessary to have someone in charge of not only ensuring that these goals are met in terms of technology, but […]]]>

In the ever-evolving world of IT, having strong systems and networks is crucial. Companies that are able to create goals surrounding their systems are sure to see growth, but they can’t stop there. It is necessary to have someone in charge of not only ensuring that these goals are met in terms of technology, but also that the technology is:

  • Designed correctly
  • Deployed efficiently
  • Maintained across its lifecycle

Enter: the system architect.

While professionals in this role might have more freedom in their overall daily job functions, there is still a general set of responsibilities required of them. We have put together some information to explain what a system architect does.

System Architect Roles and Responsibilities

What is a system architect?

A system architect is in charge of devising, configuring, operating, and maintaining both computer and networking systems. They objectively analyze desired processes and outcomes and advise on the right combination of IT systems and components to achieve specific business, department, team, or functional goals.

System architecture is closely aligned with service design.

Similar to how civil engineers need to have a complete understanding of bridges and everything they encompass, system architects must be highly proficient in understanding:

  • How much stress computer systems can take
  • How they need to be used
  • What is needed for the system designs to hold up

Levels of system architects

The system architect works at several different levels in IT, from high-level business strategy to low-level project consulting.

  • At the highest level, system architects help to define and decide on the right IT strategy and approach that will best support long-term business plans and goals.
  • At the medium level, system architects advise on the best tools, frameworks, hardware, software, and other IT elements to achieve mid-term departmental and functional objectives.
  • At the lowest level, system architects consult with and advise project teams on the specific software, hardware, and other elements needed to deliver defined IT project outcomes.

System architects are business and technology experts. They look at business plans and goals, analyze technical solutions, and create recommendations on the right mix of IT elements to achieve those objectives.

The roles of a system architect

A system architect role can be split into five areas:

  1. Understand the desired business or departmental strategy and outcome.
  2. Break down those outcomes into defined parts including products, processes, and functions.
  3. Decide on the right architecture to achieve what they have defined.
  4. Understand software, hardware, and user interactions, integrations, and interfaces.
  5. Advise project teams on implementing their recommended solutions.

System architects are often senior engineers and strategists and work with stakeholders throughout IT and the business as a whole. They must absorb large amounts of information, analyze it for key factors, and provide clear, easily implementable recommendations.

Let’s explore the key parts of a system architect’s role:

Understand the desired business or departmental strategy and outcome

IT is a crucial component of almost every business process. When the business wants to launch new products, improve efficiencies, or gain a competitive advantage, this will be captured in the strategy.

A system architect will analyze business strategy and discuss all key areas and initiatives with business strategists and high-level managers. They will translate those requirements into a demand for new or enhanced IT capabilities over the short-, medium-, and long-term.

Break down outcomes into defined parts

Once the system architect understands business and departmental demands, they will analyze and understand what specific IT capabilities will be needed. They will define this in system architecture documents for each major initiative. This becomes an important reference document to ensure consistency and clarity across all project and IT implementations.

Documentation may include:

  • The name, purpose, and outcome of the initiative
  • The main features, functionality, and processes for the initiative
  • Overall IT methodology and frameworks impacting the initiative
  • Key existing infrastructure and applications
  • New staffing or resource requirements
  • Ideas for potential software and hardware solutions

Decide on the right IT architecture

When the business decides to implement an initiative, the system architect will build out the planned IT architecture model. They will recommend specific IT hardware, software, methodologies, and approaches to help the business achieve the desired outcome.

A system architect takes the following areas into account:

  • Alignment with overall goals
  • Specific business requirements
  • The existing IT ecosystem
  • New and established technologies
  • IT resources and staffing
  • Cost control and return on investment
  • End user and customer needs and experience
  • Availability, responsiveness, reliability, and resilience of critical elements
  • Alignment with architecture standards and best practice
  • IT service management and support

Understand integrations, interfaces & interactions

A system architect doesn’t just focus on IT elements in isolation. They also look at integrations with existing systems, interfaces with people and other applications, and how users will interact with the deliverable. UI and UX are becoming an increasingly important part of the system architect role as well.

Advise project teams on recommended solutions

System architects work closely with project teams to help them turn the architecture and their vision into reality. They can advise on design and build, testing, and implementation. Feedback from engineers and end users will feed back into the system design to ensure it aligns with both business goals and user needs.

Systems architect skills

While the specific skills needed for the role will vary according to the company and industry, there are some general skills needed to be a successful system architect:

  • Experience with computer servers, network switches, load balancers, network analyzers, and network channel or data service units
  • Knowledge of developing strategic system architecture plans
  • Solid understanding of network and system development and deployment
  • Strong analytical, problem-solving, and conceptual abilities
  • Excellent verbal and written communication skills
  • Experience with information processing fundamentals and best practices
  • Ability to prioritize tasks, especially when under pressure
  • Above-average leadership and collaboration abilities

Job outlook for the system architect

According to the Bureau of Labor Statistics (BLS), employment of system architects is projected to grow about 5% from 2020 to 2030, a rate that is slower than the average for all occupations. Despite this, there are estimated to be more than 11,000 job openings per year for the position.

Looking at the next decade, the need for this job will not go away. System architects hold an important position within the IT department and the overall company, and their skills are critical to the success of the organization’s infrastructure.

The system architect role is vital to the successful definition, design, delivery, and support of any IT project. Whether an organization is looking to create new systems, or is in the process of strengthening and growing already existing ones, having a qualified system architect on the team will make all the difference.

Related reading

]]>
Intelligent Service Assurance: A Modern Way for CSPs to Achieve Service Excellence https://www.bmc.com/blogs/intelligent-service-assurance/ Tue, 01 Mar 2022 10:56:01 +0000 https://www.bmc.com/blogs/?p=51775 5G is expected to create significant new business opportunities for communications service providers (CSPs). The demand for broadband mobile data services that significantly increased during the pandemic has accelerated 5G deployments. As a result, 5G has become the fastest-adopted mobile generation. In fact, according to Telecoms.com, by the end of 2021, more than 150 CSPs […]]]>

5G is expected to create significant new business opportunities for communications service providers (CSPs). The demand for broadband mobile data services that significantly increased during the pandemic has accelerated 5G deployments. As a result, 5G has become the fastest-adopted mobile generation. In fact, according to Telecoms.com, by the end of 2021, more than 150 CSPs in over 60 countries had already launched commercial 5G services (TechTarget).

However, ultra-fast mobile data access is just one facet of the “5G game.” 5G has also been designed as a single platform to create significant new revenue streams for CSPs from a variety of non-communication services such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), and massive machine communication (mMTC)—all enabled by 5G network slicing.

The real revolution brought about by 5G (5G standalone) and related Open radio-access network (O-RAN) standards is a cloudified core and radio access network architecture that separates the network from its physical infrastructure. As a result, future telco networks will look more and more like contemporary IT infrastructure.

They will be software-defined and composed of standardized virtual network functions that can be flexibly deployed on distributed private, hybrid, or public cloud platforms—and they’ll run on standard IT infrastructure, rather than traditional vendor-specific, proprietary hardware “boxes.”

A similar cloudification process has been ongoing for nearly a decade in the domain of telco IT. It started with the virtualization of CSPs’ datacenters, followed by the launch of private clouds. Several CSPs in Europe, the Middle East, and Africa (EMEA) have already started refactoring some of their operations support and business support system (OSS/BSS) “application silos” by migrating them to cloud-native architectures while introducing DevOps culture into their organizations.

Echoing this, traditional OSS/BSS vendors (including BMC) are also containerizing their previously monolithic applications to enable flexible deployments in private, hybrid, and public clouds. Containerized architecture will likely become one of the key procurement requirements for newly deployed OSS/BSS solutions.

Due to this virtualization and cloudification, telco IT and network infrastructures will finally converge, resulting in much higher flexibility, reduced costs, improved organizational agility, and continuous service innovation. However, this process will also create new operational challenges as CSPs will have to concurrently manage and orchestrate the virtualized functions that “float” over the elastic cloud infrastructure and co-exist with the legacy infrastructure for many years to come.

For these reasons, leading CSPs are now looking for future-proof “converged” OSS solutions that will allow them to effectively and efficiently manage and operate existing IT and networks and the emerging cloudified future mesh infrastructure. Intelligent automation powered by artificial intelligence and machine learning (AI/ML) has transitioned from a “hot topic” of conversation to a business-critical requirement for CSPs.

Manual processes and traditional service assurance systems are insufficient to keep up with the speed, volume, and complexity of current CSPs, and they are not well-suited for cloudified networks, either. This makes it imperative for CSPs to completely reinvent how they approach all aspects of their services.

BMC Helix for CSP

To respond to these new requirements, BMC has launched BMC Helix for CSP, a specialized, purpose-built, TM Forum-certified intelligent service assurance solution to help modern CSPs and telcos deliver competitive and differentiated services for their highly competitive market. Now, CSPs can use one flexible and modular solution for both service assurance and service management across IT and network domains. Integrated orchestration and automation features allow CSPs to streamline their operations and drive process efficiencies, reduce costs, and speed issue resolutions.

 

Additional features include:

  • Intelligent service assurance: Easily scale to manage millions of cases with leading self-service and service desk solutions.
  • Trouble ticketing: Efficiently identify, investigate, track, and remediate network issues.
  • Work order management: Streamline management of remedial activities through automation.
  • Network operations automation: Automate the creation, enrichment, assessment, and assignment of network issues and operations for network operations centers (NOCs).
  • Dynamic service modeling: Experience 360-degree visibility of network services, resources, and interdependencies, including physical and logical network topology.
  • Service quality insights: Use interactive dashboards and service analytics for service level agreement (SLA) tracking, performance levels, and more.
  • TM Forum compliant integration: Get extensive API certifications for essential ecosystem integrations from a solution based on TM Forum information framework (SID) and business process framework (eTOM) modeling standards.

To learn more about BMC Helix for CSP, visit bmc.com/csp.

 

]]>
Data Engineer vs Data Scientist: What’s the Difference? https://www.bmc.com/blogs/data-engineer-vs-data-scientist/ Thu, 10 Feb 2022 00:00:39 +0000 https://www.bmc.com/blogs/?p=13454 The data engineer equips the business with the ability to move data from place to place, known as data pipelines. Data engineers provide data to the data science teams. The data scientist consumes data provided by the data engineers and interprets it to say something meaningful to decision-makers in the company. In this article, let’s […]]]>

The data engineer equips the business with the ability to move data from place to place, known as data pipelines. Data engineers provide data to the data science teams.

The data scientist consumes data provided by the data engineers and interprets it to say something meaningful to decision-makers in the company.

In this article, let’s dive a little deeper into the roles of data engineer and data scientist.

Data Pipeline

What is a Data Engineer?

In general, the data engineers are responsible for building pipelines, architecting the back-end databases, creating queries, and more.

Responsibilities

The data engineer will often possess a degree in computer science or engineering. Their skills involve building and working with computers directly. They build databases and the queries that interact with them, move data from one database to another, and transform data so it reaches its endpoint in the right format. They are also the ones who build APIs.

Data engineers will use a number of computer languages to get the job done. At their level, the best language depends on the task and the equipment they are working with. Java, Scala, C++, Go, and Python may be used.

Skills

  • Writing database queries
  • Building database pipelines
  • Building APIs
  • Coding language: Java, Scala, C++, Go, Julia, Python
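To make the pipeline idea concrete, here is a minimal sketch of the extract-transform-load pattern a data engineer might implement, using Python since it appears in the skills list above. The table and column names (`raw_orders`, `orders`, `amount`) are hypothetical, and in-memory SQLite databases stand in for real source and destination stores:

```python
import sqlite3

def run_pipeline(src_conn, dest_conn):
    """Extract raw rows, cast amounts to the right type, and load them."""
    rows = src_conn.execute("SELECT id, amount FROM raw_orders").fetchall()
    # Transform: amounts arrive as strings and must reach the endpoint as floats
    cleaned = [(row_id, float(amount)) for row_id, amount in rows]
    dest_conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    dest_conn.executemany("INSERT INTO orders VALUES (?, ?)", cleaned)
    dest_conn.commit()
    return len(cleaned)

# Demo: in-memory databases standing in for the real source and destination
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?)", [(1, "19.99"), (2, "5.00")])
src.commit()

dest = sqlite3.connect(":memory:")
print(run_pipeline(src, dest))  # prints 2
```

A production pipeline would add scheduling, error handling, and incremental loads, but the basic shape stays the same: extract, transform to the right types, load.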

What is a Data Scientist?

A data scientist might need to know only a little coding to ingest the data from the engineers’ sources and transform it to fit their needs.

A data scientist’s skillset is founded more on good reasoning and communication. Their job tends to be highly mathematical and statistical. They need to be able to:

  • Create hypotheses around the data sets
  • Test the hypotheses
  • Put what they learn into communicable information to decision-makers
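The hypothesis-testing loop above can be illustrated with a permutation test, which needs nothing beyond Python’s standard library. This is an illustrative sketch with made-up data (hypothetical page load times before and after a change); the 0.05 threshold is a common but not universal choice:

```python
import random
import statistics

def permutation_test(a, b, n_iter=2000, seed=0):
    """Estimate a p-value for 'mean(a) differs from mean(b)' by shuffling labels."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            extreme += 1
    return extreme / n_iter

before = [1.9, 2.1, 2.0, 2.2, 1.8, 2.1]  # seconds, before the change
after = [1.5, 1.6, 1.4, 1.7, 1.5, 1.6]   # seconds, after the change
p = permutation_test(before, after)
print(p < 0.05)  # a small p-value suggests the improvement is not due to chance
```

The result, a p-value, is exactly the kind of finding a data scientist then translates into communicable information for decision-makers.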

Responsibilities

Data scientists are responsible for consuming data from a source and finding valuable information in it. Then they are tasked with presenting that information, often through a visualization.


Skills

  • Strong mathematical and statistical skills
  • Source, filter, clean, and verify data
  • Excellent ability to reason and communicate
  • Build visualizations and data dashboards

Increasingly, data scientists are adding machine learning to their skillsets, too.

(Find out why Python is the predominant coding language for big data.)

Salary comparison

Demand for both positions is high, and both roles will be around long enough to build a career on. Pay varies with skill level and company. Data engineers used to make more money than data scientists, but in recent years, with the growing supply of people entering the field and the growing ease of working with cloud data infrastructure, pay for the two roles has roughly evened out. Overall salaries have also dropped by about $30,000 in the past five years.

While engineers’ average salary is reported to be a little less than data scientists’, their distribution curve is a little flatter, meaning a larger share of engineers earn salaries at the high end.


Data engineers & data scientists working together

The role each plays in the company is essential. The data engineers tend to be better programmers and have a far better grasp on moving data around—after all, that’s their sole job. The scientists will specialize more in data analytics and all the statistical and reporting strategies to extract meaningful information from data.

Data scientists have the more popular role because, in a way, they are the journalists of data, and create the reports for people to read. Thus, they become the face of data while the engineers are behind the scenes and make access to all the data possible for the data scientist’s reports.

Data scientists’ reports can also influence the data engineering team’s data collection efforts. If the analysts determine that a new source of information is needed, they can ask the engineers to build a pipeline that gives them access to it. The engineers, in turn, may discover that access is impossible or constrained by security issues, cost restrictions, too many inputs, or unclear definitions, and can tell the data science team that the information cannot be accessed or needs to be approached differently.

Whichever path you choose, data scientists and data engineers will be around for a long time.

Related reading

How Data Center Colocation Works https://www.bmc.com/blogs/data-center-colocation/ Fri, 04 Feb 2022 00:00:50 +0000

Data Center Colocation (aka “colo”) is a rental service for enterprise customers to store their servers and other hardware necessary for daily operations. The service offers shared, secure spaces in cool, monitored environments ideal for servers, while ensuring bandwidth needs are met. The data center will offer tiers of services that guarantee a certain amount of uptime.

The decision to move, expand, or consolidate your data center must be weighed in the context of cost, operational reliability, and, of course, security. With these considerations in mind, more companies are finding that colocation offers the solution they need without the hassle of managing their own data center.

Data center colocation works like renting from a landlord: Customers rent space in the center to store their hardware.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Benefits of data center colocation

Data center colocation could be the right choice for any business of any size, in any industry. Let’s look at the benefits.

Uptime

Server uptime is a big advantage of data center colocation for enterprise businesses. By buying into a specific tier, each client is guaranteed a certain percentage of uptime without the payroll or maintenance costs of keeping servers running themselves.
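Those tier percentages translate directly into an annual downtime budget. A quick back-of-the-envelope calculation, using the approximate availability targets the Uptime Institute publishes for its four tiers:

```python
HOURS_PER_YEAR = 24 * 365

# Approximate availability targets for the four Uptime Institute tiers
tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}

for tier, pct in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {pct}% uptime -> about {downtime_hours:.1f} hours of downtime per year")
```

Even the jump from Tier III to Tier IV shrinks the annual downtime budget from roughly an hour and a half to under half an hour.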

Risk management

Utilizing a colocation facility ensures business continuity in the event of natural disasters or an outage. This means that if your business location loses power, your network traffic will not be affected.

The key to this continuity is redundancy. The layers of redundancy offered at a colocation data center are far more complex than many companies can afford in-house.

Some enterprise companies will consider the off-site location as their primary data storage location while maintaining onsite copies of data as backup.

(Read about enterprise risk management.)

Security

Data centers are equipped with the latest security technology, including cameras and biometric readers; check-in desks that screen inbound visitors and checks for security badges are commonplace.

These facilities are monitored 24/7/365, both in the physical world and on the cloud to ensure that unauthorized access does not occur.

Cost

One of the main advantages of colocation is significant cost savings, especially when measured against managing a data center in-house. For many companies, renting the space they need from a data center offers a practical solution to ever-shrinking IT budgets. With colocation, there is no need to plan for capital expenditures such as:

  • UPS (uninterruptible power supply) systems
  • Multiple backup generators
  • Power grids
  • HVAC units (and the ongoing cost of cooling)

Apart from these capital expenditures, there are also ongoing maintenance costs associated with maintaining and managing an in-house server.
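To see how those line items compare, here is a toy cost comparison in Python. Every figure is made up for illustration, not a real quote:

```python
# Illustrative, made-up annual figures (USD) for comparing an
# in-house server room with renting colocation space
in_house = {
    "ups_and_generators": 20_000,   # amortized capital expenditure
    "hvac_and_power": 30_000,
    "maintenance_staff": 60_000,
}
colo = {
    "rack_rental": 24_000,          # e.g. $2,000/month per rack
    "remote_hands_support": 6_000,
}

in_house_total = sum(in_house.values())
colo_total = sum(colo.values())
print(f"In-house: ${in_house_total:,}/yr  Colo: ${colo_total:,}/yr  "
      f"Savings: ${in_house_total - colo_total:,}/yr")
```

Real numbers vary enormously with scale and location, but the exercise is the same: add up the capital and maintenance lines you would otherwise carry yourself, then compare against the rental quote.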

Bandwidth

Colos provide the bandwidth that enterprise client servers need to function properly. With large pipes of bandwidth powering multiple companies, colocation data centers are primed to support businesses in a way their office locations likely cannot, something that is increasingly important with remote work.

Support & certifications

Data center colocation offers the benefit of peace of mind.

When you partner with a data center colocation provider, your enterprise business may be able to reduce payroll costs by relying on the data center's certified experts to manage and troubleshoot major pieces of equipment.

Scalability

As your business grows, you can easily expand your IT infrastructure needs through colocation.

Different industries will have different requirements in terms of the functionalities they need from their data center as it relates to space, power, support and security. Regardless, your service provider will work with you to determine your needs and make adjustments quickly.

In-house data center vs data center colocation

While data center outsourcing offers many benefits, some enterprise organizations may still prefer to manage their own data centers for a few reasons.

Control over data

Whenever you put important equipment in someone else’s charge, you run the risk of damage to your equipment and even accidental data loss. Fortunately, data centers are set up with redundancy and other protocols to reduce the likelihood of this occurring, as discussed above.

But some enterprise businesses with the knowledge and resources to handle data in-house feel more comfortable being liable for their own servers.

They also benefit from being able to fix server issues immediately when they occur. Enterprise businesses who seek to outsource instead must work closely with their service providers to ensure issues are resolved in a timely manner.

Contractual constraints

Enterprise business owners may find that they are unpleasantly surprised by the limitations of the contract between their company and a colo facility, particularly clauses that cover:

  • Vendor lock-in
  • Contract termination or nonrenewal
  • Equipment ownership

Choosing a data center

Here are eight considerations enterprise IT Directors should think about before moving their data to a co-located data facility.

  1. Is the agreement flexible to meet my needs?
  2. Does the facility support my power needs, current and future?
  3. Is the facility network carrier neutral? Or does it offer a variety of network carriers?
  4. Is it the best location for my data? Accessible? Out of the way of disaster areas?
  5. Is the security up to my standards?
  6. Is the data center certified with the Uptime Institute?
  7. Does my enterprise business have a plan for handling transitional costs?
  8. Is this data center scalable for future growth?

If an enterprise business leader can answer ‘yes’ to the above questions, it may be the right time to make the change.

Cloud services vs colocation

The cloud is another option over data center colocation:

  • A cloud services provider will manage all elements of the data: servers, storage, and network elements.
  • An enterprise’s only responsibility is to consume those services and put them to use.

Cloud services are great for allowing a business to focus more on its business requirements and less on the technical requirements of warehousing its data. For new businesses in particular, cloud services can be cheaper and can help them get off the ground quicker.

More established businesses are often better suited to handle their own data center needs through colo or in-house means, and the cost to establish and maintain a colo presence can be cheaper in the long run than cloud services.

Cloud services also offer quick start-up times, a lower technical bar to get going, easily scalable (both up and down) server capacity, and integration with the other services a cloud provider might offer, such as:

  • Integrated monitoring
  • Data storage and querying tools
  • Networking tools
  • Machine learning tools

(Accurately estimate the cost of your cloud migration.)

What’s next for data center colocation?

The biggest push in the industry comes from cloud service providers who use colo as a way to meet their hefty equipment storage needs. At the same time, the industry has been and will continue to remain fluid as laws change with regard to cloud storage requirements.

While soaring demand from cloud service providers has made the need for data center colocation increase, new technology offers rack storage density options that allow colo facilities to mitigate the demand for hardware space.

Related reading

AWS Certifications in 2022 https://www.bmc.com/blogs/aws-certifications/ Thu, 03 Feb 2022 00:00:19 +0000

Amazon Web Services (AWS) certifications are highly sought-after credentials in today’s environment. AWS certs provide an industry standard for demonstrating AWS cloud expertise, rigorously testing competency, and providing an accurate representation of the test-taker’s skills.

This article covers several aspects of getting and maintaining an AWS certification, and how each relates to your AWS certification journey.

(This article is part of our AWS Guide. Use the right-hand menu to navigate.)

Available AWS certifications & categories

There are eleven active AWS certifications that you can achieve in 2022:

  • 1 foundational certification
  • 3 associate-level certifications
  • 2 professional-level certifications
  • 5 specialty certifications

Here’s a brief description of what each certification level offers and what proficiencies they certify.

Foundational Level

Cloud Practitioner is the only Foundational AWS certificate. It is ideal for candidates with at least six months of experience in any role involving the AWS cloud, such as technical, sales, purchasing, financial, or managerial positions.

This certification verifies that the candidate has an overall familiarity with the AWS Cloud and related knowledge and skills. While other certifications tie into specific technical roles such as Architect, Developer or Operations, the Cloud Practitioner certification provides a more general foundation for any career path.

Associate Level

Each of the Associate certifications typically requires at least a year of direct experience with AWS services and related technologies. The three certifications at the Associate level are:

  • Solutions Architect focuses on designing and implementing AWS distributed systems
  • SysOps Administrator focuses on deploying, managing, and operating workloads on AWS
  • Developer focuses on writing and deploying cloud-based applications

Professional Level

The Professional certifications represent the highest role-based level. Each requires a full two years of experience, with candidates expected to be successful and highly capable in their respective roles. The two Professional-level certifications are:

  • Solutions Architect validates the candidate’s ability to design, deploy, and evaluate AWS applications inside complex and diverse environments
  • DevOps Engineer validates the candidate’s ability to automate the testing and deployment of AWS applications and infrastructure

Specialty certifications

Whereas the previous three levels represent the core role-based certifications that AWS offers, the Specialty certifications provide evaluations in specific technical areas. These certifications include:

  • Advanced Networking
  • Security
  • Machine Learning
  • Data Analytics
  • Database

Requirements vary for each specialty certification. Candidates must possess experience with AWS technology, along with 2-5 years’ experience in the specialty domain. Check each individual certification for prerequisites and requirements.

How much does AWS certification cost?

To earn an AWS certification, you’ll have to pass a test. Each exam requires an AWS testing fee, typically between $100 and $300 (visit this page for Amazon’s current pricing).

Be prepared though—the examination fee won’t be your only certification cost. You may also have to invest cash and time in test preparation, including:

  • Paid classroom or remote training
  • Course materials
  • Practice exams

But there is an upside! Amazon offers an AWS Free Tier account that includes short-term trials, 12 months of free access to some services, and other free-tier services that never expire. This is valuable when studying for certification. However, if you're studying specific certification scenarios, you may have to purchase additional services beyond the free tier.

Benefits of an AWS Certification

AWS remains one of the top cloud service providers in the market. For good reason.

Obtaining AWS certifications demonstrates competency in AWS services. They also help candidates clearly demonstrate to potential employers exactly what skills they have, which helps you to:

  • Increase your competitiveness
  • Negotiate your salary

Many significant IT professional and management opportunities aren't available without a related AWS certification. While great salaries aren't guaranteed, AWS-certified jobs frequently offer salaries ranging from $90k to $160k+ USD, depending on the AWS certification category and job environment.

Of course, AWS certifications can also aid candidates in improving skills or learning new ones. Preparing for the exams through practice exercises and studying can:

  • Reinforce knowledge on key concepts
  • Correct outdated/wrong knowledge
  • Introduce you to new areas

Picking the right AWS certification

The two main factors that determine appropriate AWS certification needs are the experience level and career path desires of the candidate. If you already work in a particular field and wish to move up to higher positions, look for certifications that match your capabilities. Then, check the requirements in terms of experience and skills to determine if an Associate, Professional, or Specialty certification is the best fit.

AWS outlines several “learning paths” that can help guide candidates toward the best certifications for obtaining specific professional roles in the future.

If you’re just starting out, the Foundational Cloud Practitioner certificate can be a good choice. Explore the various learning paths to help identify specific professional goals and the best certificates to reach them.

Study & practice

There are many options for exam preparation. Useful ways to get ready for AWS certification include:

  • Taking training classes
  • Using study guides
  • Taking practice exams
  • Reading AWS whitepapers

Training classes are available through AWS, and third-party global and local AWS training partners. You can find AWS approved instruction at the AWS Classroom Training Overview web page.

Remote and (sometimes) in-person training offer the best options for learning AWS skills and certifications. They provide instructor-led training and labs, as well as practice exams, books, and exercises. Amazon also offers study guides in both ebook and physical formats.

Studying for AWS exams

Studying for any given exam is likely to require anywhere from 80 to 120 hours.

For candidates working full-time jobs, this can mean months of preparation. Start a study regimen two or three months before the exam date, with a consistent weekly schedule designed to cover all the relevant material in that timeframe. Certification exams cover a lot of material in an Amazon-specific format, so give yourself plenty of time to absorb it.
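That guideline is easy to turn into a weekly plan. A quick sketch (the 10-week window is just an example):

```python
def weekly_study_hours(total_hours: int, weeks: int) -> float:
    """Spread an estimated preparation load evenly over a study window."""
    return total_hours / weeks

# 80-120 hours of preparation spread over a 10-week (roughly 2-3 month) window
for total in (80, 120):
    print(f"{total} hours over 10 weeks -> {weekly_study_hours(total, 10):.0f} h/week")
```

In other words, budget roughly 8 to 12 hours of study per week, and stretch the window if your schedule cannot absorb that.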

Regarding exam practice, regularly take the certification practice tests provided with your study materials. Even if you've been working with the material for years, certification exam questions may contain specific terminology and phrasings that you're unfamiliar with. Taking the practice tests helps prepare you for the tone and pace of the exam.

For more details on the format, type, delivery method, time limit, costs, and available languages of each exam, check the page of your intended certification by clicking on its badge on the AWS learn about training by role or solution page.

Related reading
