How Data Center Cooling Works & Can Promote Sustainability

The backbone of every single digital service is a vast network of servers and computing resources that deliver the performance and availability necessary for business operations—and hopefully continually improve the customer and end-user experience. These resources are responsible for…

  • Performing search queries
  • Transferring data
  • Delivering computing services

…for millions of users at any given moment, all around the world.

Now take all those servers and consider the power and heat they generate. Anyone who plays games on a laptop, desktop, or gaming platform knows how hot the equipment gets. At enterprise scale, with server rooms stacked with aisle after aisle of computing machines, that problem is significant.

Keeping servers cool is important, especially for enterprise-size businesses—but it also has a significant negative impact on Planet Earth. So, in this article, let’s take a look at data center cooling and how companies can harness simple practices and modern technology to make data center cooling a more sustainable practice.

(This article is part of our Sustainable IT Guide.)


What is data center cooling?

Data center cooling is exactly what it sounds like: controlling the temperature inside data centers to reduce heat. Failing to manage the heat and airflow within a data center can have disastrous effects on a business. Not only is energy efficiency seriously diminished—with lots of resources spent on keeping the temperature down—but the risk of servers overheating rises rapidly.

The cooling system in a modern data center regulates several parameters in guiding the flow of heat and cooling to achieve maximum efficiency. These parameters include but aren’t limited to:

  • Temperatures
  • Cooling performance
  • Energy consumption
  • Cooling fluid flow characteristics

All of a data center cooling system's components are interconnected and impact the overall efficiency of the cooling system. No matter how you set up your data center or server room, cooling is necessary to keep the data center working and available to run your business.

(Explore best practices for data center migrations & infrastructure management.)

Is data center cooling necessary?

Yes—but maybe to a lesser extent than we have long believed. The general rule of thumb has been to ensure an entire room's ambient temperature stabilizes at 18 degrees Celsius. But this is unnecessarily low, as Sentry Software explains:

It has been proven that computer systems can operate without problems with an ambient temperature significantly higher…This is the fastest and cheapest method to reduce the energy consumed by a datacenter and improve its P.U.E. [Power Usage Effectiveness]

The oft-cited example is Google, which successfully raised the temperature of its data centers to 26.7°C (80° Fahrenheit). We'll talk more about the famous Google case study later on.
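For reference, PUE is simply total facility energy divided by the energy that actually reaches the IT equipment, so 1.0 is the unreachable ideal. Here is a minimal sketch of the calculation; the wattage figures are invented for illustration, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt reaches the servers);
    higher values mean more overhead, much of it cooling.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw


# Invented figures: 1,000 kW of IT load plus 800 kW of cooling and other
# overhead gives a PUE of 1.8. Raising the room temperature shrinks the
# cooling share, pushing the ratio back toward 1.0.
print(pue(total_facility_kw=1800, it_equipment_kw=1000))  # 1.8
```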

Data center managers should understand how failing to implement effective cooling technology in a server room can quickly cause overheating.

Poorly configured cooling could deliver the wrong type of server cooling to your data center (for example, bottom-up tile cooling technology being used with modern servers that exhaust back to front). This, again, would lead to serious overheating, a risk that no business should accept willingly.

Benefits of data center cooling

The business benefits of data center cooling are abundantly clear.

Ensured server uptime

Proper data center cooling technologies allow servers to stay online for longer. Overheating can be disastrous in a professional environment that requires over 99.99% uptime, so any failure at the server level will have knock-on effects for your business and your customers.

Greater efficiency in the data center

Data doesn't travel faster in cooler server rooms, but it travels a lot faster than it would through a crashed server!

Because data centers can quickly develop hot spots (regardless of whether the data center manager intended a cold aisle or a hot aisle setup), cooling solutions need to be efficient and easy to adjust on the fly.

This means only using liquid cooling technologies that are easily adaptable or air-cooling systems that can easily change the way cold air is used. Overall, this allows for greater efficiency when scaling up a data center.

Longer lifespan of your technology

Computers that constantly overheat are going to fail before they reach their expected end of life (EOL). For that reason, the expense of cooling systems in a data center quickly starts to pay for itself.

Introducing data center cooling technologies allows hardware to survive longer and a business to spend less on replacing infrastructure. Companies should be moving towards greener IT solutions, not actively creating industrial waste.

(Learn about modernizing the mainframe & your software.)

Drawbacks of data center cooling

Though data center cooling is critical for business success, it has a significant impact on the bottom line—and on Planet Earth.

Costs can be prohibitive

Data center cooling is expensive: so much energy is needed simply to run the servers and data centers. You’ll spend additional energy on reducing the heat these systems generate.

For small-sized operations, state-of-the-art cooling systems are simply not possible. Expensive HVAC systems and intricate water-cooling systems that are specifically designed for data center temperature control can cost well more than SMBs can afford.

But this does not mean that SMB data centers must fail. Simple solutions like blanking panels can be used to encourage the easy flow of cold air throughout the server room. Similarly, organized cables (i.e. not a cable nightmare for an IT technician) can allow for better airflow too.

Severe impact on the planet

As much as 50% of all power used in a data center is spent on cooling technologies. Major enterprises are all moving towards reducing their carbon footprint, which means cooling technologies either have to change—or need to go.

According to the Global e-Sustainability Initiative (GESI), in their Smarter 2030 report, the digital world today, at this very moment, encompasses:

  • 34 billion pieces of equipment
  • More than 4 billion users

With the network infrastructures and data centers associated with these billions and billions, our digital world is responsible for 2.3% of global greenhouse gas (GHG) emissions. Data centers themselves account for 1% of the world’s electricity consumption and 0.5% of CO2 emissions. And science recognizes that significantly reducing GHG emissions is a mandate in order to slow and reverse the effects of climate change.

As a global collective, tech companies must work together to reduce energy consumption. That’s why we’re seeing more and more companies of all ilk announce their plans to cut GHG emissions—and many of them are turning to data center cooling with a fresh approach.

How to cool data centers

Although setting up a full data center cooling system might seem a daunting task, it is a necessary step. All server rooms must have the amount of cooling that their technology demands.

In the high-performance world we live in, failing to install a necessary water loop can cause serious issues; a lack of consistently cool air throughout the data center leads to failures—so get everything installed at the start!

Many specialist organizations work with data centers to properly identify the necessary cooling solutions, install the technology, and manage the data center’s equipment from installation to EOL. Finding one in your local area is the best decision for an IT team that does not have experience in configuring cooling systems in data centers.

How cooling works: drawing heat out

In order to maintain optimal performance of the computing infrastructure, the data center must maintain an optimal room and server hardware temperature. The cooling system essentially draws heat from data center equipment and its surrounding environment. Cool air or fluids replace the heat to reduce the temperature of the hardware.

Data center cooling techniques

Data center cooling is a balancing act that requires the IT technicians responsible for it to consider a number of factors. Among many, some of the most common ways of controlling computer room air are:

  • Liquid cooling uses water to cool the servers. Using a Computer Room Air Handler (CRAH) is a popular way to combine liquid cooling and air cooling, but emerging technologies like Microsoft's "boiling water cooling" are also used for cooling data center servers and driving evaporative cooling technology.
  • Air cooling uses a variety of Computer Room Air Conditioner (CRAC) technology to create easy paths for hot air to leave the IT space.
  • Raised floor platforms create a chilled space below the platform into which a CRAH or CRAC can push cold air via chilled water coolers and other technologies, creating cold aisles underneath the servers.
  • Temperature and humidity controls, such as an HVAC system that controls the cooling infrastructure, and other technologies provide air conditioning functionality.
  • Control through hot and cold aisle containment alternates hot exhaust aisles and cold intake aisles through the server room. Proper airflow, the use of a raised floor, and other cooling technology such as liquid cooling or HVAC cooling solutions are supported by hot and cold aisles within a data center.

Though necessary for business, these techniques require significant energy spend.

Using AI & neural networks

Significant improvements in cooling system technologies in the last decade have allowed organizations to improve efficiency, but the pace of improvements has slowed more recently. Instead of regularly reinvesting in new cooling technologies to pursue diminishing returns, however, you can now implement artificial intelligence (AI) to efficiently manage the cooling operations of your data center infrastructure.

Traditional engineering approaches struggle to keep pace with rapid business needs. What worked for you in terms of temperature control and energy consumption a decade ago is likely not enough today—and AI can help to accurately model these complex interdependencies.

How AI improves cooling

Google’s implementation of AI to address this challenge involves the use of neural networks, a methodology that exhibits cognitive behavior to identify patterns between complex input and output parameters.

For instance, a small change in ambient air temperature may require significant variations of cool airflow between server aisles, but the process may not satisfy safety and efficiency constraints among certain components.

This relationship may be largely unknown, unpredictable, and behave nonlinearly for any manual or human-supervised control system to identify and counteract effectively.

Organizations today can equip data centers with IoT sensors that provide real-time information on various components, server workloads, power consumption, and ambient conditions. The neural network takes the instantaneous, average, total, or meta-variable values from these sensors as its inputs.
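As a concrete (if drastically simplified) illustration of that pattern, the sketch below trains a small neural network on synthetic sensor readings to predict PUE. This is not Google's implementation; the sensor features, data, and model size are all invented for the example:

```python
# Illustration only: predict PUE from sensor readings with a small
# neural network. Synthetic data stands in for real IoT telemetry.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Invented sensor matrix: [outside_temp_C, server_load_fraction, fan_speed_krpm]
X = rng.uniform([5.0, 0.2, 2.0], [35.0, 1.0, 10.0], size=(500, 3))

# Invented ground truth: hotter weather and heavier load worsen PUE a little.
y = 1.1 + 0.004 * X[:, 0] + 0.08 * X[:, 1] + rng.normal(0.0, 0.01, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)

# Predicted PUE for a mild day, moderate load, and medium fan speed.
print(model.predict([[18.0, 0.5, 6.0]]))
```

A production system would feed predictions like this into safety-constrained setpoint recommendations rather than acting on the raw model output.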

Google pioneered this approach in the 2010s, and today, more and more organizations are embracing AI to support necessary IT operations.

(Learn more about AIOps.)

Case study: Google data center

Google's implementation of the neural network reduced the prediction error to 0.004 Power Usage Effectiveness (PUE), or 0.34-0.37 percent of the PUE value (which implies an underlying PUE of roughly 1.1). The error percentage is expected to fall further as the neural network processes new data sets and validates the results against actual system behavior. These numbers translate into a 40% energy savings for the data center cooling system.

(Graph: predicted vs. actual PUE values at a major data center)

The graph below demonstrates how the neural network implementation delivered PUE improvements over the years. Since these results are aggregated from multiple data centers operating under different environmental and technical constraints, the optimal implementation of the machine learning algorithm promises even better improvements in comparison with traditional control system implementations.

The neural network algorithm is one of many methodologies that Internet companies including Google may have implemented for data center cooling applications.

Bring Your Own Device (BYOD): Best Practices for the Workplace

Not so long ago, the Bring Your Own Device (BYOD) movement was largely contested across enterprise organizations. Proponents of the BYOD trend focused the debate on its productivity benefits. Opponents uncompromisingly considered it a liability.

Both sides remained adamant until progressive organizations riding the wave of enterprise mobility took action, unleashing the value that BYOD has to offer. These actions involved strategic best practices and layers of risk mitigation activities that ultimately enable BYOD devices—and their users—to:

  • Power workforce productivity
  • Yield profitability for the organization
  • Adhere to enterprise security needs

So, in this article, let’s look at why more and more people want BYOD. Then, we’ll consider the hard question: is BYOD really worth the security headache?

What is BYOD?

BYOD is a set of policies that allow employees to bring in and use their own personal devices. These devices can include laptops, smartphones, tablets—whatever is needed to complete the tasks at hand.

The goal behind BYOD is straightforward: because workers use personal devices they already know to access company data and systems, employees should be more productive in the long run.

Today, of course, many companies are creating distributed workplaces, powered in part by more relaxed approaches to BYOD. When the pandemic hit, an employee might not have been able to work easily from a phone or tablet. Now a variety of software solutions support employees in the brave new world of remote work, making BYOD important in a way it had never been before.

BYOD practices thrive in Agile and DevOps-driven environments. Users should take advantage of well-integrated cloud solutions to facilitate collaboration, communication, and information access across otherwise siloed organizational departments.

Challenges of using personal devices for work

Although many find the idea of using a personal device quite attractive, BYOD initiatives need to be carefully considered. Employee satisfaction and overall enterprise agility need to be carefully weighed against the risks and the Mobile Device Management (MDM) work that needs to happen before employees can download potentially sensitive data to their devices.

Other challenges of using personal devices for work include…

Poorly supported internally

BYOD falls somewhere between the business and IT functions, resulting in service desks that often fail to support the needs of the agile and mobile workforce. That’s because service desks were mostly built to service on-premises employees using employer-provided equipment.

This stark divide leads to many data and security concerns for businesses, particularly when they cannot meet the demand of their employees.

Shadow IT

Repeated requests, unfavorable governance, and slow request approval processes encourage the workforce to take matters into their own hands. Employees may adopt shadow IT—the grey area where users download or use software and apps that your organization hasn’t approved. The risk here? These shadow IT practices bypass your security mechanisms.

Lost property

The elephant in the room is the lost/stolen problem. Although it’s nice for your employees to complete work-related tasks with their own mobile devices, there is a serious risk of your employees losing the device and placing company data into the hands of possibly anyone, especially with cloud-native apps that make syncing and sharing data as easy as pressing a button.

For industries that need a high level of security on every single device, think about what you gain and what you have to trade. Are the benefits of BYOD solutions enough to warrant storing data on a device which might not be secure? That's a question which a lot of companies answer differently.


Best Practices for BYOD in the Workplace

To address these challenges, organizations must invest in the right skillset and advancement in IT transformation to align service management capabilities with the BYOD needs of fast-paced DevOps-driven processes.

From a strategic perspective, the following policy best practices can empower organizations to achieve these goals:

1. Understand organizational requirements

Every organization differs in structure, culture, diversity, workforce preferences, IT policies, and regulatory compliance requirements. These differences are compounded by your company's:

  • Geographic location
  • Industry vertical
  • Size and age

As a result, every organization may have unique limitations on BYOD technology adoption, preferences, and requirements.

In DevOps environments, the organization must empower the service management function to develop protocols and procedures designed to facilitate their own unique BYOD requirements in the context of the challenges they face. This approach will ensure smooth BYOD adoption that leads to workforce productivity—without disrupting the behavior, compliance, and security posture of the organization.

2. Develop a flexible BYOD policy

It is practically impossible to satisfy every member of the workforce with BYOD policies. Regardless of the device, BYOD policies should encompass different user roles, privileges, and controls as part of your mobility strategy.

The most engaging enterprise mobility strategies that facilitate effective collaboration, information access, and strict adherence to security best practices do take a flexible, user-centric approach:

  • Establish simple, automated workflows that make it easier for internal customers to enroll their devices and request approvals for new apps and solutions.
  • Outline the security requirements with clear, simple, and easy-to-understand details.
  • Future-proof your BYOD strategies to address the upcoming needs of internal customers and the business landscape.
  • Respect end-user privacy by implementing the necessary protocols to segregate personal data from business information and apps on BYOD devices.

3. Track BYOD usage

Employee-owned BYOD devices are common targets for adversaries, especially in the age of AI cyberattacks. Vulnerable personal devices with high-level user access and privileges can cause costly data leaks and potentially irreversible damage to the business.

With the enforcement of stringent data regulations like GDPR, organizations must balance workforce demands for BYOD against regulatory compliance and security threats. The security risk and implications of BYOD adoption have emerged as a top concern among business organizations, according to Verizon.

Managing corporate data through intelligent mobile device management is key to appeasing your employees who want to use their personal devices—without allowing said devices to become an easy route for sensitive data leaks. Real-time security monitoring and detection become critical to ensure secure enterprise mobility practices with BYOD. IT needs to:

  • Track a range of metrics pertaining to network traffic and security
  • Understand how users and apps access corporate information
  • Restrict data consumption and information access based on organizational security and business policies through effective security measures

4. Educate the workforce

End users act as either the first line of defense against cyber-attacks or the first loophole in BYOD security. Knowledgeable and security-aware professionals can help ward off the majority of cyber-attacks that start with downloading malicious apps, accessing rogue websites, or clicking links in phishing attempts.

Train and convince your workforce to comply with your organization’s security and BYOD policy in a few ways:

  • Educate employees on the security risks associated with Shadow IT practices.
  • Provide adequate reasons and pathways to avoid security malpractices.
  • Establish a culture of trust and loyalty among the workforce to reduce the possibility of employees going rogue against the organization.

When devices may become the agents of your downfall, the last point is especially important. If you are going to trust your employees to bring their own tablets, laptops, and phones to work with, you need to trust your employees generally too. The technology is a risk, but so are the people you trust to use it!

5. Empower IT with the right tools

Forward-thinking business organizations transform their IT to meet the mobile device and BYOD needs of today and tomorrow. Organizations need to understand their current working environment and clarify the desired future state of enterprise mobility.

BYOD policies should be designed to engage internal customers with the right processes, data, and technologies to transition between the current and desired future states.

  • Automate device enrollment and configuration as well as real-time troubleshooting in order to reduce the number of headaches that personally owned devices will give your service desk.
  • Adopt app vetting processes based on simple and automated workflows that make it convenient for ITSM teams to process app approval requests.
  • Invest in advanced Enterprise Mobility Management (EMM) that enables IT admins to facilitate the evolving and diverse BYOD needs of the agile workforce.
  • Implement multiple layers of security to protect BYOD devices; protect corporate data; facilitate effective communication and collaboration; and manage access controls and risks.
  • Include the tooling necessary for risk mitigation on devices and damage limitation in response to security infringements.

(Read more about enterprise mobility management.)

6. Expect a culture change

Finally, an effective BYOD policy should be designed to instigate a cultural shift toward secure and productive enterprise mobility practices. DevOps already brings best practices that facilitate strong interdepartmental collaboration, integrated business and IT operations, and automated workflows that streamline the adoption of new apps, technologies, and processes.

Design your BYOD policies to identify and eliminate the inhibitors to BYOD success, such as:

  • Isolated IT departments
  • Siloed business and IT operations
  • Slow and inadequate governance procedures
  • The unnecessary walled gardens that force employees to adopt shadow IT alternatives

Do I need to implement BYOD?

Mobile devices are everywhere, and they bring amazing potential. Personal devices not only improve employee output and happiness, but they bring cost savings too.

If your company wants to enable employees to use their own mobile devices to connect to the work network, think about your security posture and failsafes. Although you have the technology to implement the policy, could your organization suffer data loss through lost or stolen devices? Do you have enough security responders to put out the fire if something compromises a laptop that isn't secure?

If you're working in an agile environment, employees often expect to bring their own devices. Think about the benefits that BYOD will bring, but not before you sort out your security posture, MDM, and the other issues that could be stoked by an employee using a personal device for work.

What Is a Canonical Data Model? CDMs Explained

The companies succeeding in the age of big data are often ones that have improved their data integration and are going beyond simply collecting and mining data. These enterprises are integrating data from isolated silos to implement a useful data model into business intelligence that can:

  • Drive vital decision making
  • Improve internal processes
  • Indicate service improvement areas and opportunities

Data integration isn't easy though, especially the larger your enterprise and the more software systems you rely on. The hodgepodge of legacy systems and new tools makes enterprise architectures difficult to manage, especially due to the different data formats that all these tools receive.

More and more, companies need to share data across all these systems. The problem is how difficult sharing data is when each system has different languages, requirements, and protocols. One solution is the canonical data model (CDM), effectively implementing middleware to translate and manage the data.

Defining a Canonical Data Model (CDM)

CDMs are a type of data model that aims to present data entities and relationships in the simplest possible form to integrate processes across various systems and databases. A CDM is also known as a common data model because that’s what we’re aiming for—a common language to manage data!

More often than not, the data exchanged across various systems relies on different languages, syntaxes, and protocols. The purpose of a CDM is to enable an enterprise to create and distribute a common definition of its entire data unit. This allows for smoother integration between systems.


How canonical data models work

Importantly, a canonical data model is not a merge of all data models. Instead, it is a new way to model data that is different from the connected systems. This model must be able to contain and translate the other types of data.

  1. When one system needs to send data to another system, it first translates its data into a standard syntax (a canonical or common format) that is independent of the syntax and protocol of either system.
  2. When the second system receives data from the first system, it translates that canonical format into its own data format.

By implementing this kind of data model, data is translated and "untranslated" by every system that an organization includes in its CDM. A CDM approach can and should cover any technology the enterprise uses. The sketch below illustrates the two translation steps.
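Here is a toy sketch of those translation steps in Python. The system formats and the canonical schema are invented for illustration; a real CDM would be a governed, enterprise-wide schema:

```python
# Toy sketch of the two translation steps described above. The field
# names and the canonical schema here are invented for illustration.

# System A speaks one dialect...
order_from_system_a = {"cust_name": "Ada Lovelace", "order_total": "150.00"}

def a_to_canonical(record: dict) -> dict:
    """Translate System A's format into the shared canonical format."""
    return {
        "customer": {"full_name": record["cust_name"]},
        "total_cents": int(float(record["order_total"]) * 100),
    }

def canonical_to_b(record: dict) -> dict:
    """Translate the canonical format into System B's format."""
    return {
        "customerFullName": record["customer"]["full_name"],
        "totalAmount": record["total_cents"] / 100,
    }

# A -> canonical -> B: neither system needs to know the other's schema.
canonical = a_to_canonical(order_from_system_a)
print(canonical_to_b(canonical))
# {'customerFullName': 'Ada Lovelace', 'totalAmount': 150.0}
```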


Benefits of employing a CDM

Enterprises that successfully employ a CDM see the following benefits:

  • Perform fewer translations. Without a CDM, the more systems you have, the more data translations you must do: point-to-point integration of N systems requires on the order of N×(N−1) translations, while a CDM needs only one translation to and from the canonical format per system. With a CDM in place, you cut down on the manual work that data integration requires, and you limit the chances of user error.
  • Improve translation maintenance. On an enterprise level, systems will inevitably be replaced by other systems, whether new versions or vendor SOAs that replace legacy systems. When just a single system changes, you only need to verify the translations to and from the CDM. If you're not employing a CDM, you may spend significantly more time verifying translations to every other system.
  • Enhance logic maintenance. In a CDM, the logic is written within the canonical model, so there is no dependence on any other systems. Like translation maintenance, when you change out one system, you need only to verify the new system’s logic within the logic of the CDM, not with every other system that your new system may need to communicate with.

How to implement a canonical data model

In its most extreme form, a canon approach would mean having one person, customer, order, product, etc., with a set of IDs, attributes, and associations that the entire enterprise can agree upon.

By employing a CDM, you are taking a canonical approach in which every application translates its data into a single, common model that all other applications also understand. This standardization is good.

Everyone in the company, including non-technical staff, can see that the time saved translating data between systems is time better spent on other projects.

Building a CDM

You may be tempted to use an existing data model from a connecting system as the basis of your CDM. A single, central system such as your ERP may house all sorts of data—perhaps all of your data—so it seems like a decent starting point to the untrained eye.

Experts caution against this seeming shortcut. If the system that is the basis of your model ever changes, even to a newer version, you may be stuck using old data models and an outdated system, which negates the benefit of the flexibility that CDMs are designed for.

You will also face problems with licenses. Developers who try to handle various similar data models may also spend more time trying to decipher the differences, which can lead to more user errors.

If you’re opting for a canonical data model, create your model from scratch. Focus on flexibility so that you reap the purpose of the CDM: easy changes as your enterprise architecture necessarily changes. Otherwise, the convenience of a common data format will quickly become extremely inconvenient.

CDMs in reality

Getting a company to buy into the idea of a CDM can be difficult. Building a single data model that can accommodate multiple data protocols and languages requires an enterprise-wide approach that can take a lot of time and resources.

When to avoid a canonical data model

From an executive perspective, the time and money investment may be too significant to take on unless there is a real, tangible change for the end user—which may not be the case when building a CDM. Other critics argue that the CDM is a theoretical approach that doesn't work when applied practically. A project as large as this is so time- and resource-consuming precisely because it is unwieldy.

The inflexibility of making every service fit within a specific data model means you may lose the best use cases for some systems. These systems may benefit from less strict specifications, not the one-size-fits-all goal of a canonical approach.

What experts recommend instead

These critics recommend that an enterprise architect approach the idea of a CDM differently: if you like the goal of data consistency, consider standardizing on formats and fragments of these data models, such as small XML or JSON pieces that help standardize small groupings of attributes.

Less centralization will allow for independent parts to determine what’s best: teams should decide to opt into a CDM approach, instead of a top-down decision where everyone is forced to create a canon data model.

Should my organization adopt a CDM?

CDMs may benefit your company depending on the size and needs of your data. If you can spend the time on such a project, bear in mind that the more systems and applications that need to share data, the more elusive a one-size-fits-all canonical model can be.

Effectively implementing all your entities into one centralized model and creating a common data format that communicates across all systems will speed up your enterprise's data handling capabilities. Taking data from disparate systems and managing them in a central location makes implementing data into business decisions more efficient and more effective.

What's CD4ML? Continuous Delivery with Machine Learning Explained

Applying data science to your continuous delivery (CD) model is key to gaining real-world outcomes.

In a world where CI/CD is king, the growing demands for continuous rollouts of software need to incorporate machine learning and data-driven principles. Enterprises need to move beyond DevOps principles that are merely supported by data science…

Instead, we are looking at Machine Learning Operations (MLOps).

Continuous integration and continuous delivery need to be supported by machine learning models in a new type of life cycle. By creating an environment that combines CI/CD and machine learning—known as CD4ML—every step that your business takes is powered by data. When you know where you want to go, data builds the path to get there. When you need to understand where you are, data informs you how to strengthen your position.

But it's not an easy process. Integrating data, data scientists, and data model training comes with plenty of challenges, mainly relating to cost and the undefined amount of time it can take to get a working data model. Can ML models improve your continuous delivery business needs? Let's find out.

How continuous delivery works with machine learning

Continuous delivery with machine learning requires a different business workflow. Although we are all used to DevOps processes, the practices are at risk of becoming inefficient if data cannot be properly integrated. A DevOps team with a data engineer who consults a model does not make a CD4ML team.

What is CD4ML?

Continuous Delivery for Machine Learning (CD4ML) is a software engineering approach whose principles and practices use machine learning and data models to inform traditional software development processes, training on data to make the deployment pipeline both more agile and more user-focused.

Continuous development and continuous integration feed off data taken from the teams responsible for data collection and management. New data becomes the central concern for machine learning engineers and software development professionals alike.

Structure of CD4ML

CD4ML needs four key stages in order to work:

  1. Data scientists collect, analyze, and present hypotheses to the development team.
  2. The DevOps team integrates the findings into existing applications and services.
  3. Data engineers continuously analyze and update the data in order to optimize the data model that is being used.
  4. Business representatives present outcomes to the data scientists who then collect and explore new data to help fit the desired software development needs.

This process does not happen as a linear pipeline. It is a constant, cyclical process that needs business representatives to feed outcomes back to the data scientists exploring the data. This completes the cycle and allows for continuous development and continuous deployment with the desired and necessary features.


Each stage of the CD4ML cycle refers to data and business outcomes to achieve real-world results. (Source: ThoughtWorks)
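To make the cycle concrete, here is a drastically simplified sketch of one turn of it: train a candidate model on newly collected data, validate it, and promote it only if it beats the current production model. The registry stand-in, metric, and data are all invented for illustration:

```python
# Simplified sketch of one turn of the CD4ML cycle: train a candidate
# model on fresh data, evaluate it, and promote it to the deployment
# pipeline only if it beats the current production model. The
# "registry" dict and the accuracy threshold are invented stand-ins;
# real setups use a model registry and CI/CD tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

registry = {"production_accuracy": 0.80}  # stand-in for a model registry

def one_cd4ml_cycle(X: np.ndarray, y: np.ndarray) -> None:
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    score = accuracy_score(y_val, candidate.predict(X_val))
    if score > registry["production_accuracy"]:
        registry["production_accuracy"] = score  # "deploy" the candidate
        print(f"Promoted new model (accuracy {score:.2f})")
    else:
        print(f"Kept production model (candidate scored {score:.2f})")

# Synthetic "user data" collected since the last cycle.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
one_cd4ml_cycle(X, y)
```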

Comparing CD4ML vs DevOps

Within a machine learning informed production environment, the software development process is driven by data sets and ML engineers. Data engineering informs data models in production, allowing for development cycles to constantly refer to data scientist-driven information.

Pipelines in an MLOps team work by constantly feeding new information to the next stage. This allows for model development, model monitoring, informed development, and recalibrating goals and desired outcomes that underpin the ML system.

Is CD4ML a better approach than DevOps?

DevOps is widely adopted, but now is the time to move your production teams into the next paradigm. When implemented correctly, embedding data analysis and implementation into your software development workflow offers the benefits summarized below.

Of course, it is not all positive. In fact, CD4ML is generally prohibitively expensive for SMBs due to hiring costs and the general upkeep of employing an entire data team to integrate with your DevOps pipeline. Is it worth it? If you’re a small-sized operation, you may think not.

CD4ML benefits & drawbacks

Benefits of CD4ML:

  • User data is analyzed and implemented thanks to data scientists building models that explain user intentions and wants.
  • A cross-functional team brings more skillsets to the development process, refining and improving the end product with many expert perspectives.
  • Better version control is achieved through incrementally adding small pieces of code to existing software based on data-driven insights.
  • Shorter development cycles are measured in days due to the data science team feeding smaller-sized packages into the development process, speeding up production and allowing for better version control.

Drawbacks of CD4ML:

  • The cost of a data science team seriously limits the number of businesses that are able to hire enough data engineers and scientists to implement CD4ML.
  • Initial development is no faster, as the data team can only work when there is actual data to use.
  • Building effective data models is a slow process, meaning that a business that wants to implement CD4ML will not see results until the data team has effectively trained the model.

How does CD4ML improve business?

Depending on the size of your business, Continuous Delivery for Machine Learning allows you to integrate and use specific data to improve your software development process. This allows for better software rollouts as the continuous delivery and deployment of software is defined by the data that you collect from your users.

Implementing CD4ML in your business's workflow creates a better-defined approach to updates and improvements. Data around your users needs to be analyzed and carefully considered before it can properly be used for improving development. That's why you need both data scientists and data engineers:

  • Data scientists clean and prepare your data before the developers can implement changes.
  • Data engineers keep that data up to date and relevant to your business needs.

Although DevOps approaches are incredibly popular throughout the software world, changing to an MLOps approach will allow your business to take the next step. Your organization builds data into the development process, speeding up production and pinpointing exact solutions to business problems as they are raised.

If you can afford to expand your process to include an expansive data analysis team, why wouldn't you integrate it into your DevOps pipeline?

What's BPEL? Business Process Execution Language Explained

In any given business, there are processes which need to be repeated. Although there may be a small number of business processes that don’t use computer logic or web services, the reality is that most modern business practices take place virtually.

So how do business leaders ensure that processes are defined and followed? Enter stage: BPEL.

Developed and approved by OASIS (a standards consortium whose members include business and technology giants such as Dell, IBM, and Microsoft), BPEL is designed to provide businesses with a way to automate and orchestrate business processes—sending emails, completing transactions, processing data, and many other procedures, including the ability to correct errors found in other processes.

So, let’s take a look.

What is BPEL?

The name says it all: business process execution language.

BPEL is a language that makes these business processes programmable and easy to implement. Being able to define your processes, orchestrate and choreograph them, and integrate web services into the average business workflow makes BPEL a jump forward for teams aiming to improve automation across their organization.

The birth of BPEL

Developed in 2001 by IBM and Microsoft, Business Process Execution Language was designed to combine two now-defunct XML languages: IBM's WSFL and Microsoft's XLANG. These languages were used for programming in the large and served similar purposes in the business context.

Although Microsoft initially intended to carry out this process on their own with the development of XLANG/s, the two companies joined forces to create an XML language which would take the market by storm.

Business Process Execution Language version 1.0/1.1

The two companies developed and named the resulting programming language “BPEL4WS”—short for Business Process Execution Language for Web Services. This name would be later abandoned, but the initial BPEL4WS was a big step in the direction of creating a programming in the large language that would dominate the market.

Business Process Execution Language 2.0

Throughout the years, additional teams have contributed to the Business Process Execution Language project, including SAP and Adobe Systems.

The most recent version of BPEL (WS-BPEL 2.0) was released in 2007. SAP would go on to create Business Process Execution Language for People (BPEL4People) to show how BPEL and web services can be combined with human interactions.

Components of BPEL

The ease of integrating BPEL within your organization allows for enterprise-scale automation of services, including external processes through web services.

When it was originally designed, the team behind BPEL intended to fulfill 10 design goals:

  • Define business processes through web services to help them interact with external entities
  • Use an XML-based language to fulfill all business needs
  • Build web services orchestration capabilities into the language as abstract and executable business processes
  • Provide control regimes that are both hierarchical and graph-like, to reduce fragmentation
  • Build data manipulation and analysis functionality into the language
  • Support an identification mechanism for process instances
  • Provide for the creation and termination of process instances as a basic lifecycle mechanism
  • Define a transaction model that includes techniques such as compensation actions and scoping
  • Use web services as the underlying model
  • Build all functionality on, and add to, the web services standard

(Diagram: BPEL example workflow)

Because of these 10 aims, BPEL (particularly the 2.0 version) was designed to be an orchestration language—not a choreography language like XLANG and the initial BPEL4WS creation.

The additional benefits that BPEL brings to a company through these 10 aims include:

  • Large scale automation
  • Orchestration of automated processes
  • Easy to integrate web services

(Compare automation & orchestration.)

BPEL and orchestration

BPEL orchestrates processes, resulting in three distinct benefits for businesses:

  • Data and services are easier to access
  • The business becomes more agile
  • Integrating new services is inexpensive

When your organization can access many internal and external services, BPEL can take all the data and make it readily available to you from a centralized location.

Using external web services actually adds to the overall automation process—web services handle both abstract and executable business processes, allowing your staff to spend their hours on what you need them to do.

Similarly, new aspects of the infrastructure can be immediately integrated with BPEL and web services, allowing large-scale improvements to business processes.

Whenever new applications or infrastructure is added, they can be immediately integrated with new process descriptions. Large scale changes cause problems for many organizations, but the BPEL process management is there to make the process as painless as possible.


(Diagram: orchestration in action)

BPEL use cases

Suggested use cases for WS-BPEL/Business Process Execution Language of other versions can be found on the OASIS Open website.

BPEL & Salesforce

When Salesforce needs to interact with the central CRM application, BPEL can be used to automate communication between the BPEL-powered execution environment (the engine and the platform) and the application, allowing for proper application integration.

BPEL & data manipulation

BPEL is perfect for data collection and manipulation. The use case provided by OASIS allows for a number of functions, including:

  • Variable initialization
  • Assigning variables by copying another variable or other variables
  • Creating and working with sample XPath calculations
  • Working with array structures

Integrating BPEL in your organization

Businesses live and die on their ability to repeat mission-critical tasks. When these tasks can become more agile and be performed without expensive integrations, a business can look forward to developing its processes.

When you can take every business process in your organization and make it automatic, business efficiency increases (even in the face of human behavior, thanks to SAP's work with BPEL4People).

Integrating BPEL is perfect for businesses that have a wide range of process descriptions that can make the leap to the world of automation. With BPEL, business process management is straightforward.

IT Ticketing Software: An Introduction

How do you handle your office’s need for IT support? Do you have an internal team you email that rushes to your side to help or handles plenty of processes for you?

Or does your IT department answer the phone like Roy from The IT Crowd?


“Hello, IT. Have you tried turning it off and on again?”

If you feel like Roy, maybe it’s time for automated responses. It’s time for a ticketing system.

What is IT ticketing software?

IT ticketing software collects incoming support requests from multiple channels. The software stores and manages all types of work for all types of departments: IT support, software and product development, HR, legal, financial, and more.

Ticketing systems work by collecting data:

  • Data that tracks communication between the requestor and the service provider
  • Data for reporting and analytics

The IT ticketing system acts as a single point of contact between the problem-finder and the problem-solver. Almost all IT ticketing software has these common capabilities:

  • A central archive for service requests (on-site or through cloud services)
  • Creation of digital tickets via email, through online forms for the consumer
  • Automated responses tied to pre-determined actions or workflows
  • Classification and organization of tickets
  • Communication tracking between the requestor and the service provider
  • Data collection for reporting and analytics

These tools elevate workers to higher-level problem-solving. Leave the little stuff to the automated systems. Pay your employees to solve complex problems (or deal with complex clients).

(Read more about help desk automation.)

Why companies need IT ticketing software

Automating tedious jobs—which is what ticketing software can help do—frees up your talent to focus on what they are good at.

Let's look at IT employees. If IT support workers are busy answering the little problems, they miss the big ones: a serious hardware failure, a piece of malware, anything that could derail business operations.

With ticketing systems, the brain goes into auto-pilot on little tasks, so you can better focus on the larger goal.

Why employees need IT ticketing software

IT ticketing software automates the rote answers by directing the requestor to FAQ resolutions. If it's a complex scenario or a customer needs a personal touch, the IT ticketing system can escalate the problem to a human operator.

The IT team can now focus on the difficult stuff instead of wondering whether the caller has tried turning it off and on again.

Benefits of IT ticketing software

  • IT departments can reduce the overall cost of responding to incidents
  • There are fewer errors because the IT team isn’t under pressure to reset passwords or plug in speakers when a serious problem comes in
  • Delays are cut, especially for minor issues
  • Events escalate less because fewer issues need manual intervention
  • The IT department is satisfied because their job is more than just dealing with repetitive tasks and forgotten passwords

The result is a happy team working in tandem with automated systems. Together, they can ensure that business operations are smooth and that your clients are happy.

(Explore how AI can augment human work.)

Streamlining IT support in your organization

IT ticketing software will streamline your IT support by collecting and managing the process from end to end. When the individual sends in a service request, the software sends an automated message to the requester letting them know it has been received.

The ticket is created, prioritized, and assigned to the individual who can resolve the issue. Throughout this process, the software collects data that supplies the respondent with additional tools. In turn, IT teams can focus on automating new processes if they become daily issues.

(Ticket workflow diagram: Create → Prioritize → Assign → Resolve → Adapt)

This frees up time otherwise spent sorting through tickets that someone else is better skilled to answer, as well as surfacing previous solutions to the problem. The respondent can analyze the ticket, suggest appropriate fixes, and resolve the issue promptly.

From beginning to end, the software documents the process and can even alert the requestor to the status of the resolution. IT ticketing software typically comes as a stand-alone tool or as part of a larger ITSM platform.
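For a feel of the mechanics, here is a stripped-down sketch of that receive, acknowledge, prioritize, and assign flow. The ticket fields and the keyword-based routing rules are invented examples, not any particular product's API:

```python
# Stripped-down sketch of the flow just described: a request comes in,
# an acknowledgement goes out, and the ticket is prioritized and
# assigned. Fields and routing rules are invented examples.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class Ticket:
    requester: str
    summary: str
    priority: str = "low"
    assignee: str = "unassigned"
    id: int = field(default_factory=lambda: next(_ids))

def open_ticket(requester: str, summary: str) -> Ticket:
    ticket = Ticket(requester, summary)
    # Naive keyword prioritization/routing; real systems use richer rules.
    if "outage" in summary.lower():
        ticket.priority, ticket.assignee = "critical", "ops-team"
    elif "password" in summary.lower():
        ticket.priority, ticket.assignee = "low", "service-desk"
    print(f"[auto-reply] {requester}: ticket #{ticket.id} received")
    return ticket

t = open_ticket("roy@example.com", "Email outage on floor 4")
print(t.priority, t.assignee)  # critical ops-team
```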

What is an ITSM platform?

Short for IT service management, ITSM tools and platforms are a collection of services that support your IT department. ITSM is responsible for the implementation and management of IT support that improves your overall business operations and customer satisfaction.

An IT ticket system is only one part of an ITSM platform but provides plenty of immediate value to an organization.

An ITSM platform also helps you communicate clearly with your clients. When incidents occur (whether routine or emergency), these platforms keep the lines of communication open and inform your customers and stakeholders of the company's current affairs.

(Learn how to choose ITSM tools.)

Are ITSM platforms necessary?

Incidents happen. Whether it is an unplanned interruption to a service, a reduction in the quality of service, or an event that will influence service, the goal is to minimize the negative impact on the customer.

Significant communication and collaboration are necessary for successfully handling the situation. Putting an ITSM platform in place long before the incident ever occurs is a smart and prudent decision.

BMC is a service management leader

BMC is a recognized industry leader for ITSM solutions. And our BMC Helix portfolio, built for the cloud, is the first and only intelligent enhanced end-to-end service and operations platform.

Container Sprawl: What It Is & How To Avoid It

“Container sprawl is the new virtual machine sprawl” is quickly starting to sound like a cliche. But it’s difficult to disagree. Inappropriate management of the number of containers used leads to:

  • High costs
  • Security concerns
  • Potentially, a lack of governance

As more organizations expand their cloud presence, there is a growing need for containers. Effective container management allows your employees to quickly access applications on a variety of operating systems. And it's big business: container providers expect the market to grow to nearly $500 billion by 2023. Projects like Kubernetes and Docker are leading the cloud-based push—but do we truly understand how to effectively manage containers yet?

The main issue for these companies spending a significant part of their budget on containers is two-fold:

  • Avoiding sprawl
  • Ensuring that proper management stops excessive spending and employees wastefully spinning up containers

So, in this article, let’s take a look at container sprawl. We’ll start with a brief recap of containers and their benefits.

How containers work

A container is a piece of software that packages an application and all its associated code and dependencies in one place. This allows the container and its application to be "lifted out" of its native OS and placed somewhere new.

Containers offer a smaller-scale type of virtualization, but they're different from virtual machines (VMs). While using a virtual machine creates a new "machine" on the hardware, containers only create applications in the OS.

(Read our full explainer on VMs vs containers.)
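To see how lightweight this is in practice, here is a minimal sketch using Docker's Python SDK (the docker package). It assumes a local Docker daemon is running and that the alpine image can be pulled; the command run is just an example:

```python
# Minimal sketch using Docker's Python SDK (pip install docker).
# Assumes a Docker daemon is running locally; the image and command
# are illustrative only.
import docker

client = docker.from_env()

# Run a throwaway container: the app and everything it needs ship in
# the image, with no new "machine" created the way a VM would require.
output = client.containers.run("alpine", ["echo", "hello from a container"],
                               remove=True)
print(output.decode())

# Containers running right now. Unmanaged growth of this list is
# exactly the "sprawl" this article is about.
for c in client.containers.list():
    print(c.name, c.image.tags)
```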


Why use containers?

Container use is incredibly convenient, especially when you need resources in a distributed workplace. Here are three of the top reasons for using containers in your operations:

  • Compatibility issues. If a company is using an older version of a programming language but knows that a newer version will run its application, the development team can run it in a containerized environment without having to install or uninstall the necessary runtimes on each machine.
  • Cost-cutting. Instead of splashing out on any physical server or end-point technologies, applications can be packaged to be used anywhere.
  • Security. Isolating your applications in one place contains any vulnerable aspects of necessary services, rather than exposing them to the wider system.

Growth of cloud-native apps & containers

Cloud-native applications (that is, apps entirely hosted and operated in the cloud) are becoming more popular, especially during the global pandemic. Among the greatest beneficiaries are your IT organization's DevOps teams: with containers, your developers can quickly build full stacks to test new code in a faux infrastructure that mimics the organization.

The best part? All this can happen from anywhere.

Cloud-native apps are a logical way to deploy applications to a distributed working environment. Containers also allow businesses to host their own full-stack development platforms from anywhere in the world. Physical access to servers becomes less and less of a concern.

Now, let’s turn to the problem with too many containers: sprawl.

What is container sprawl?

Container sprawl is the tendency to run up an unnecessary number of containers. Although there are differences between running a cloud-native container system and a physical data center, the biggest problem is effectively the same:

Cost.

Spinning up a lot of containers causes business problems, typically in cloud computing fees and management issues that result in inefficiencies. To be clear: even though creating containers is significantly more convenient than setting up new physical servers, the cost implications can quickly grow out of control.
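
To put a number on it, a sketch like the one below, again using the Docker SDK for Python against a local daemon, surfaces one common form of sprawl on a single host: containers that exited long ago but were never removed and still take up disk space.

```python
# Rough single-host sprawl audit: find containers that have exited but were
# never cleaned up. Assumptions: `pip install docker` and a local Docker daemon.
import docker

client = docker.from_env()

# all=True includes stopped containers, not just running ones.
exited = [c for c in client.containers.list(all=True) if c.status == "exited"]

print(f"{len(exited)} exited containers still on this host:")
for c in exited:
    # Each leftover container still holds its writable layer on disk.
    print(f"  {c.short_id}  {c.name}  {c.image.tags}")
```

At fleet scale the same inventory would come from your orchestrator or management platform rather than a per-host script, but the principle holds: you can’t govern containers you haven’t counted.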

Container sprawl vs VM sprawl

The convenience of pulling new images and running containers means that sprawl is reaching a critical level. Kubernetes-based workflows, in particular, tend to produce a new image and new containers with every change that is deployed. Unless you have the team skills and effective cluster management tools, operations quickly spiral out of control.

Because a variety of environments can be quickly spun up without affecting the overall performance of a network, a lack of central governance is increasingly problematic. Managing clusters in a virtual workplace is difficult because resources are often mismanaged.

The main takeaway, though, is that container sprawl is much easier to manage than VM sprawl. Sensible VM-era policies can stop your team from indulging in container sprawl too.

(Understand how containers & Kubernetes work together.)

How to manage containers

Security, operations, and management teams don’t have to abandon what’s useful about containers. Instead, the use of clusters will cut down on management complexity and on sprawl that potentially harms the organization’s security. The right management approach will also help cut costs.

Here are three suggestions for improving your team’s chances in the fight against container sprawl.

Combine applications

Sometimes, applications should be combined. Merging what might otherwise become three or four separate containers into one results in:

  • Fewer individual images being loaded
  • Overall decreased costs

Overzealous container fans may want to turn every single process into a container, but combining containers can actually improve functionality and save costs at the same time. Central governance and change management of containers keep those savings in place.

You don’t need to place every single application in its own image if you only ever use them in conjunction with other ones!

Combine VMs & containers

At the risk of inviting VM sprawl, using virtual machines to create containers can cut down on the overall costs of out-of-control container creation.

Host a VM in the cloud and then add a container image to that machine. For example, hosting Kubernetes or Docker on an infrastructure-as-a-service (IaaS) platform, such as GCP or AWS, can cost less than simply subscribing to a managed container service. You may sacrifice some raw performance, but the cost-benefit analysis may pay off.

Abandon servers

Abandon servers altogether?! Slow down! We’re not abandoning them entirely. In fact, we’re not abandoning anything; this is just moving more applications and components to pay-per-use cloud services.

The main benefit of this approach is curbing the cost of using containers. If your team only uses an app once a month, you will only be charged for that time. Moving infrequently used applications and processes into the cloud on a cost-per-use basis will slow the financial bite of container sprawl.

Using containers without inviting container sprawl

Containers are becoming more necessary in day-to-day operations, so managing container sprawl is now, unfortunately, a fact of life for IT management. Implementing effective container and virtual machine policies is the best way to make the most of the functionality that containers bring without letting costs spin out of control.

Think about the suggestions above and how they could aid your company. Is it time to start creating containers or is it time to move all applications into the cloud? Either way, avoiding container sprawl is key for effective governance, slowing expenditure, and effectively managing this incredibly useful functionality.

]]>
How Middleware Works https://www.bmc.com/blogs/middleware/ Fri, 06 Aug 2021 12:41:15 +0000 https://www.bmc.com/blogs/?p=50357 Middleware first gained popularity in the 1980s. Deploying middleware then allowed new applications and services to access older legacy back-end systems. This allowed developers to ensure that new interfaces are capable of receiving data from older back-end technology. Sometimes referred to as “software glue”, middleware continues to be used today, as software that mediates between […]]]>

Middleware first gained popularity in the 1980s. Deploying middleware allowed new applications and services to access older legacy back-end systems, so developers could ensure that new interfaces were capable of receiving data from older back-end technology.

Sometimes referred to as “software glue”, middleware continues to be used today as software that mediates between two separate pieces of software. It underpins the back-end architecture and allows an operating system to communicate with applications.

Today’s middleware is less about connecting to legacy systems and more about overall access and communication. Let’s take a look.

What is middleware?

Middleware is software that acts as an intermediary between the back end and the front end. It is a runner between the two platforms that allows users to access otherwise inaccessible functionality. For example:

  • Middleware may run between a Windows machine and a Linux back-end.
  • Middleware could be the piece that allows enterprise employees to use remote applications.

Middleware can be easily understood as the “to” in phrases like peer-to-peer—it is middleware software that enables the data to travel.

Middleware is also important for application development and delivery. By understanding the operating systems that a company uses to underpin its operations, a software developer can account for the way in which an application will be distributed.

How middleware works

Allowing applications to communicate through middleware means greater longevity for the operating system architecture: the back end can evolve while integration middleware keeps communication intact.

Various types of middleware are necessary for performing the functions we expect of the applications that we use in business and personal settings. In fact, they are far more common than most people realize.

For example, Android users rely on middleware every time they use a phone application. The Android operating system is built on a modified Linux kernel, so developers need to build applications with communication with Linux in mind.

The app needs to communicate with the Linux back end, and it uses the Android OS as middleware to do so. When a request from the application needs to reach the back end, the Android operating system will (see the sketch after this list):

  1. Pass the request on to the back end.
  2. Receive the data in response.
  3. Transform the data for the application.
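
Here is that three-step flow as a toy Python sketch. It is purely schematic, not how Android itself is implemented, and the function names and data shapes are hypothetical.

```python
# Toy illustration of the three-step middleware flow above. Schematic only:
# this is not Android code, and all names and data shapes are hypothetical.

def backend_lookup(user_id: int) -> dict:
    """Stand-in for the back-end system the middleware talks to."""
    return {"uid": user_id, "name_raw": "ada lovelace", "active_flag": 1}

def middleware_get_user(user_id: int) -> dict:
    raw = backend_lookup(user_id)  # 1. pass the request on to the back end
    # 2. receive the data in response (in the back end's own raw format)
    # 3. transform the data into the shape the application expects
    return {
        "id": raw["uid"],
        "name": raw["name_raw"].title(),
        "active": bool(raw["active_flag"]),
    }

print(middleware_get_user(42))  # {'id': 42, 'name': 'Ada Lovelace', 'active': True}
```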

Where is middleware useful?

Everywhere! Middleware is useful absolutely everywhere, but especially in enterprise settings.

Middleware and middleware developers support application development by allowing back ends to become front-end agnostic, in some cases. Middleware links operating systems of all kinds to all kinds of front-end applications, with varieties ranging from web server middleware to database middleware.


Types of middleware

Middleware is not one type of software. It comes in many forms, and each type has a different use case for improving productivity and access to apps.

In fact, there are many broad types of middleware that each serve different purposes. Depending on its needs, a business may only require middleware that blocks the client machine while it accesses remote programs. Other enterprises might need to use local and remote functionality concurrently, meaning they need a different type.

Here are some examples of middleware, including examples you will have encountered in your day-to-day life.

Application programming interface (API)

An API communicates information between a front end and a back end. Strictly speaking, APIs aren’t middleware, but they serve a very similar purpose.

A common place people encounter APIs is in headless platforms: the back end communicates only with the APIs, which then carry the data to the front end. This allows web services to be completely customizable, instead of being locked into a rigid, monolithic provision.

(Learn how to build API portals devs will love.)
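
As a minimal, self-contained illustration of that headless pattern, the sketch below serves back-end data through a tiny JSON endpoint using only Python’s standard library. The route and payload are hypothetical, and any front end could render the response however it likes.

```python
# Minimal headless-style JSON API using only the Python standard library.
# The back end exposes data through an endpoint; the front end is free to
# render it however it likes. Route and payload are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRODUCTS = [{"id": 1, "name": "widget", "price": 9.99}]  # stand-in back-end data

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/products":
            body = json.dumps(PRODUCTS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```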

Remote procedure call (RPC)

When working with distributed computing setups, an enterprise application might rely on a specific kind of middleware called Remote Procedure Call (RPC).


When a computer program starts a procedure on a remote machine, it is beginning a client-to-server process. In order to do this, the local computer starts to use middleware to allow the server to perform the procedure.

This allows an end user to run remote procedures as if they were local and to make the most of distributed systems and any enterprise application the user needs. Without RPC, it would be difficult to run thin-client machines.
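
Python’s standard library includes a simple XML-RPC implementation, which makes for a compact sketch of the idea; the procedure name and port below are arbitrary choices, not part of any particular product. Note how the client calls `add` as if it were a local function while the middleware marshals the call across the network.

```python
# Compact RPC sketch using Python's standard-library XML-RPC modules.
# The procedure name and port are arbitrary. The server runs in a background
# thread here only so the example is self-contained in one file.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def add(a: int, b: int) -> int:
    return a + b

server = SimpleXMLRPCServer(("localhost", 8080), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side stub looks like a local call; the RPC middleware marshals
# the arguments, sends them to the server, and returns the result.
proxy = xmlrpc.client.ServerProxy("http://localhost:8080/")
print(proxy.add(2, 3))  # prints 5, computed on the "remote" server
```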

Message-oriented middleware (MOM)

MOM is similar to RPC in that it allows users to take advantage of distributed systems. Intended for applications that span multiple operating systems, it takes the messages output by software components such as applications and allows for platform-agnostic communication.

Whereas RPC blocks the client until the called procedure returns, MOM allows for loose coupling of components. A message broker (the middleware) translates and relays messages between systems without locking them together, and delivers the results back to the end user.
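
To show what loose coupling looks like in code, here is a toy in-process sketch that uses Python’s standard `queue` module as a stand-in for a real message broker: the producer never waits for the consumer, it simply leaves messages on the queue.

```python
# Toy stand-in for message-oriented middleware using the standard library.
# A queue plays the broker's role: the producer fires messages and moves on,
# and the consumer handles them on its own schedule (loose coupling).
import queue
import threading

broker = queue.Queue()  # stand-in for a real message broker

def producer():
    for i in range(3):
        # Unlike RPC, the producer does not block waiting for a reply.
        broker.put({"event": "order_created", "order_id": i})

def consumer():
    while True:
        msg = broker.get()          # pick up the next message when ready
        print(f"processed {msg}")   # translation/handling happens here
        broker.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
broker.join()  # wait until every queued message has been handled
```

Contrast this with the RPC sketch above, where the caller blocks until the remote procedure returns.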

Middleware is necessary

Whether you are working in cloud computing or in another area that requires distributed applications, middleware gives developers a range of building blocks for creating application servers and other components that are useful in an enterprise environment.

Applications can work together with a range of back-end technology to ensure that having a diverse computer environment does not hold back an organization’s ability to access enterprise applications.

In fact, middleware provides a wide range of uses to any business.

]]>
Low Code vs No Code Explained https://www.bmc.com/blogs/low-code-vs-no-code/ Fri, 30 Jul 2021 00:00:00 +0000 https://www.bmc.com/blogs/?p=13147 The biggest change since cloud-based software. Low-code and no-code platforms are bringing about the biggest change in the IT world since Google Docs. Cloud-based software forced the rest of the IT world to sit up and listen. Traditional providers were forced to change overnight. And now the world of DevOps is being forced to listen […]]]>

The biggest change since cloud-based software.

Low-code and no-code platforms are bringing about the biggest change in the IT world since Google Docs. Cloud-based software forced the rest of the IT world to sit up and listen. Traditional providers were forced to change overnight.

And now the world of DevOps is being forced to listen to the new kids on the block—low-code and no-code platforms, which have the potential to make software development so easy that almost anyone can do it.

Do you need to build applications for immediate use? Maybe it’s time to move away from professional developers and pick up a new low-/no-code platform.

What is “low code”?

Low-code programming removes up to 90% of the coding process. These low-code development platforms cut down on development by using innovative drag-and-drop tools that:

  • Streamline the process
  • Save on turnaround time

Specialized language knowledge becomes less of a problem because all the groundwork is already written. Application development is faster because specialist coders with narrow skills are not needed.

When you need to develop and release lots of applications, low code helps minimize opportunities for bottlenecks in the workflow. In-demand skills are no longer stretched in every direction: the low-code platform handles the groundwork for you.

Because the demand for mobile app development services is outgrowing what the industry can deliver, low-code platforms make it easier for non-developers to create apps. The definition of a developer broadens. Now, non-specialists using low-code platforms can answer the call for more apps—without support from specialists.

Examples of low-code platforms

Two of the most successful low-code platforms are Salesforce and Zoho. Both platforms allow users to create their own applications based on a wide range of frameworks, for example through Salesforce’s AppExchange ecosystem.

By providing the majority of the code for their clients, both Salesforce and Zoho let clients customize what they show their customers and end users. Rapid application development is easy because the low-code approach allows a content management system or customer relationship management program to be rolled out almost instantly.

What is “no-code?”

Defining no-code is somewhat problematic when it comes to the question of low-code vs no-code. The Gartner Magic Quadrant for Low-Code Application Platforms 2020 report grouped low-code and no-code together as one. Although it is a distinct style of application development, no-code is still widely seen as a subsection of low-code.

No-code development platforms are growing in popularity because they allow non-technical individuals to create apps and other tools. There is no need to code anything in the application.

Instead, they use simple, intuitive interfaces, generally with drag-and-drop functionality. This makes application development an agile process: there is no need to wait for a developer to create even the last 10% of an application.

The real benefit of no-code platforms is that application developers can quickly respond to business needs. Business process management no longer needs traditional programming experts to build apps or tools.

Examples of no-code platforms

If you have used Pipedrive for your CRM needs, Airtable for cloud workspaces, or Canva for graphic design, you will understand the benefits of no-code solutions. They have taken difficult areas that generally require a high level of coding skill and made them available to users.

Canva is a particularly interesting case. Its popularity boom has led to the no-code platform being valued at $3.2 billion. The market for no-code solutions is expanding and professional developers may have to worry about their job security.

(See how BMC harnesses no code platforms.)


How low/no-code is changing the industry

It’s impossible to deny that no-code and low-code tools are having a significant impact on the market. As knowledge about these platforms spreads, so do their expected market share and the total worth of these innovative companies.

By 2024, Gartner expects that 65% of all app development will originate with low-/no-code tools. Business needs will have become too great to wait for people fluent in programming languages to build everything from the ground up. Instead, business users will take their needs into their own hands.

Forrester agrees with Gartner, estimating that low-code and no-code resources will be worth up to $21 billion in 2022. This incredible growth has several causes, as noted above, but the most important benefit is speed.

Enterprises want to have their applications developed quickly—but are they minimizing the problems with this approach?

Who has to change?

The group that may have to change the most is those in classical DevOps positions. Although the demand for traditional coders will not disappear overnight, there is the potential for fewer positions to be filled.

This may address the gap in the number of developer jobs that go unfilled throughout the year, but does it spell disaster for specialized workers?

Possibly, but it’s not likely. Whereas specialized coders working in rare languages or one particular area of framework development might become less “necessary” to a business, there are still going to be jobs available. But the rise of citizen developers is a concern for many in the industry.

Are low/no-code platforms flawless?

In short, no. These platforms solve one problem, but they may expose you to additional risk.

Problems with low-code

Many low-code platforms are difficult to master. Although creating an app on someone else’s chassis is easier in the short-term, long-term scalability of high-quality applications may be out of reach.

These platforms may struggle to balance both high performance and creation ease. If it’s so easy, can it handle complexity? That’s why low-code apps may only be short-term solutions for business needs.

Problems with no-code

No-code causes more problems than low-code for two reasons:

  • The risk of shadow IT
  • General technical debt

Because no part of the application is written by a developer, a non-specialist may make mistakes that a developer wouldn’t. From a technical point of view, having your entire code base generated by someone else could lead to vulnerabilities or inefficiencies that cause difficulties later on.

This creates a great deal of technical debt (issues you’ll eventually need to deal with) that could leave an enterprise with slow or ineffective apps. For example, the generated code that forms the basis of the application may be full of unnecessary or irrelevant filler.

Of course, no-code options make it so easy to do what you want quickly that members of your organization may experiment with unsanctioned or untested apps (shadow IT) that might not meet your organization’s compliance and security needs.

Is low-code/no-code the future?

Low-code and no-code platforms absolutely have a place in the world of development. Gartner is expecting them to start to dominate the market within only a few years, so ignoring the benefits that these approaches bring could mean missed opportunities for an enterprise.

But low-code and no-code aren’t going to erase the need for traditional coders just yet. Relying on others for all your application development needs can be a risk, no matter how useful the platforms are.

DevOps processes still have a place in the workplace, even if low-code solutions may have a majority share of the market in the years to come.

]]>
IT Security vs IT Compliance: What’s The Difference? https://www.bmc.com/blogs/it-security-vs-it-compliance-whats-the-difference/ Fri, 25 Jun 2021 08:00:26 +0000 http://www.bmc.com/blogs/?p=11030 The line between security and compliance is easily blurred. Sometimes they feel like a moving target. Maybe you’ve asked yourself one of these burning questions: How do we create comprehensive security programs while meeting compliance obligations? Is checking the compliance box really enough? How does all this enable the business to function and move forward? […]]]>

The line between security and compliance is easily blurred. Sometimes they feel like a moving target. Maybe you’ve asked yourself one of these burning questions:

  • How do we create comprehensive security programs while meeting compliance obligations?
  • Is checking the compliance box really enough?
  • How does all this enable the business to function and move forward?

These questions shape the direction of an organization and ultimately determine whether it succeeds or fails.

So, in this article, let’s clarify the differences between IT security and IT compliance.


What is IT security?

Security officers follow industry best practices to secure IT systems, especially at the organizational or enterprise level. Security pros are constantly looking at how to both:

  • Prevent attackers from harming the company IT infrastructure and business data
  • Mitigate the amount of damage that is done when an attack is successful

In the past, administrators would take a purely technical approach and rely heavily on systems and tools to protect their network. Today, though, things have changed.

Due to increased specialization and technical know-how, IT security is not limited to a single field or discipline. Instead, there are multiple areas such as architecture and infrastructure management, cybersecurity, testing, and especially information security, arguably the most critical discipline for any organization.

Information security (InfoSec) is exercising due diligence and due care to protect the confidentiality, integrity, and availability of critical business assets, something security pros know as the CIA Triad. Any IT security program must take a holistic view of an organization’s security needs and implement the proper physical, technical, and administrative controls to meet those objectives.

Taking the three key functions of confidentiality, integrity, and availability, organizations can implement effective InfoSec protocols. But what does CIA actually mean?

Here are the three key properties and what each one means for managing InfoSec.

  • Confidentiality. Company information can be sensitive: customer data, proprietary information, innovations in the works. It is the duty of IT security to protect this information by ensuring that only the correct, authorized users and systems can read, change, and use data.
  • Integrity. Information, and the system that contains it, must be correct. Having integrity means knowing that what is stored is accurate and that the system has measures to ensure it stays that way (a small example follows this list).
  • Availability. Systems and information need to be available when they are needed. If a system isn’t available, it can’t be relied on.
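
As one small, concrete example of an integrity control (a sketch, not a full program): comparing a file’s cryptographic hash against a trusted baseline detects unauthorized changes to stored data. The file name and contents here are made up.

```python
# Sketch of a basic integrity control: verify a file against a trusted
# SHA-256 baseline. File name and contents are made-up examples.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

cfg = Path("critical_config.txt")
cfg.write_text("max_connections=100\n")     # stand-in for a protected asset
baseline = sha256_of(cfg)                   # trusted value, e.g. captured at deploy time

cfg.write_text("max_connections=100000\n")  # simulate an unauthorized change
status = "passed" if sha256_of(cfg) == baseline else "FAILED: file was modified"
print(f"integrity check {status}")
```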


Two additional properties, authentication and non-repudiation, are also vital to IT security.

(Learn more in our IT security policy explainer.)

How IT security looks today

Traditionally, security professionals relied on devices like firewalls and content filters along with network segmentation and restricted access. But as modern threat agents have become more and more sophisticated, the tools that security analysts and officers use have become more complex too.

Old-school technical controls cannot account for every modern threat on their own. Today, security professionals need a fuller kit of tools to battle against malicious outside threats.

The concept of IT security comes down to employing measures that give an organization’s assets the best possible protection. At the heart of all good IT security protocols is the CIA triad.

(Explore the roles of Chief Information Security Officer and the security team.)

What is IT compliance?

IT compliance is the process of meeting a third party’s requirements, whether set by a regulator, a market, or a particular customer, with the aim of enabling business operations.

Compliance sometimes overlaps with security—but the motive behind compliance is different. It is centered around the requirements of a third party, such as:

  • Industry regulations
  • Government policies
  • Security frameworks
  • Client/customer contractual terms

Let’s say that IT security is a carrot: it motivates the company to protect itself because doing so is good for the company. IT compliance, then, is the stick; failure to follow compliance regulations can have serious effects on your business.

Often, these external rules ensure that a given organization can deal with complex needs. Sometimes, compliance requires an organization to go beyond what might be considered reasonably necessary. These objectives are critical to success because a lack of compliance will result in:

  • At minimum, a loss of customer trust and damage to your reputation.
  • At worst, legal and financial ramifications that could result in your organization paying hefty fees or being blocked from working in a certain geography or market.

Areas where compliance is a key business concern:

  • Countries with data/privacy laws like GDPR, the California Consumer Privacy Act, and more
  • Markets with heavy regulations, such as healthcare or finance
  • Clients with high confidentiality standards

These areas almost always demand a high level of compliance. Importantly, IT compliance can apply in domains other than IT security. Complying with contract terms, for example, might be about how available or reliable your services are, not only whether they’re secure.

When is compliance necessary?

Whether you need to comply with certain regulations depends on many factors:

  • Your industry
  • Your company’s size or location
  • The customers you serve
  • Many other factors

Many laws outline very specific criteria that a business must meet—but they don’t apply to everyone. For example:

  • HIPAA is a U.S. law that defines how the healthcare industry protects and shares personal health information.
  • SOX is a financial regulation in the U.S. that applies to a broad spectrum of industries.
  • Payment Card Industry Data Security Standards (PCI-DSS) are a group of security regulations that protect consumer privacy when personal credit card information is transmitted, stored, and processed by businesses.
  • ISO 27001, on the other hand, is not a law but a standard that companies can opt into by aligning with these InfoSec standards.

Other standards you must comply with might not be law or opt-in; some might originate directly with your customers. A high-profile client may require the business to implement very strict security controls in order to award their contract.

Compliance & GRC

Compliance is only one part of a greater scheme for aligning an organization with industry, government, or other requirements. The pieces are summed up in the acronym GRC:

  • Governance. Before compliance is possible, organizations need to make plans that are directed and controlled. Setting direction, monitoring developments, and evaluating outcomes are all key to effective governance.
  • Risk. Danger is everywhere, and it needs to be recognized. Compliance requires risks to be identified, analyzed, and controlled as much as possible.
  • Compliance. When appropriately governed and risk-managed, an organization can evaluate its compliance. Standards are not just set but evaluated and managed at every step.

Comparing IT security & IT compliance

Security is the practice of implementing effective technical controls to protect company assets. Compliance is the application of that practice to meet a third party’s regulatory or contractual requirements.


Here is a brief rundown of the key differences between these two concepts. Security is:

  • Practiced for its own sake, not to satisfy a third party’s needs
  • Driven by the need to protect against constant threats to an organization’s assets
  • Never truly finished and should be continuously maintained and improved

Compliance is:

  • Practiced to satisfy external requirements and facilitate business operations
  • Driven by business needs (rarely technical needs)
  • “Done” when the third party is satisfied

At first glance, it’s easy to see that a strictly compliance-based approach to IT security falls short of the mark. This attitude focuses on doing only the minimum required in order to satisfy requirements, which would quickly lead to serious problems in an age of increasingly complex malware and cyberattacks.

How security & compliance work together

We can all agree that businesses need an effective IT Security program. Robust security protocols and procedures enable your business to go beyond checking boxes and start employing truly effective practices to protect its most critical assets.

This is where concepts like defense-in-depth, layered security systems, and user awareness training come in, along with regular tests by external parties to ensure that these controls are actually working. If a business were focused solely on meeting compliance standards that don’t require these critical functions, they would be leaving the door wide open to attackers who prey on low-hanging fruit.

While compliance is often seen as doing only the bare minimum, it’s useful in its own right. Compliance is an asset to the business, not just hoops you must jump through. Becoming compliant with a respected industry standard like ISO 27001 can:

  • Bolster your organization’s reputation
  • Garner new business with security-minded customers

Compliance can also help identify gaps in your existing IT security program that might otherwise go unnoticed outside a compliance audit. Additionally, compliance helps organizations maintain a standardized security program, as opposed to one where controls are chosen at the whim of the administrator.

Secure & comply: both business-critical

The astute security professional will see that security and compliance go hand in hand and complement each other in areas where one may fall short.

  • Compliance establishes a comprehensive baseline for an organization’s security posture.
  • Diligent security practices build on that baseline to ensure that the business is covered from every angle.

With an equal focus on both of these concepts, a business will be empowered to not only meet the standards for its market but also demonstrate that it goes above and beyond in its commitment to digital security.

]]>