
What are microservices?

Microservices architecture is an approach in which a single application is composed of many loosely coupled and independently deployable smaller services.

Microservices (or microservices architecture) are a cloud native architectural approach in which a single application is composed of many loosely coupled and independently deployable smaller components, or services. These services typically:

 have their own technology stack, inclusive of the database and data management model;
 communicate with one another over a combination of REST APIs, event streaming, and message brokers; and
 are organized by business capability, with the line separating services often referred to as a bounded context.
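
To make this concrete, here is a minimal sketch of one such service — a hypothetical inventory service exposing a single REST endpoint — assuming Python and Flask; the names and data are illustrative only:

# A minimal sketch of one independently deployable service (a hypothetical
# "inventory" service), assuming Python and Flask are installed.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would live in the service's own datastore;
# an in-memory dict keeps the sketch self-contained.
INVENTORY = {"sku-123": {"name": "Widget", "stock": 42}}

@app.route("/items/<sku>")
def get_item(sku):
    item = INVENTORY.get(sku)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run(port=5001)  # each service runs, scales and deploys on its own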

While much of the discussion about microservices has revolved around architectural
definitions and characteristics, their value can be more commonly understood through
fairly simple business and organizational benefits:

 Code can be updated more easily - new features or functionality can be added without touching the entire application.
 Teams can use different stacks and different programming languages for different components.
 Components can be scaled independently of one another, reducing the waste and cost associated with having to scale entire applications because a single feature might be facing too much load.

Microservices might also be understood by what they are not. The two comparisons
drawn most frequently with microservices architecture are monolithic architecture
and service-oriented architecture (SOA).

The difference between microservices and monolithic architecture is that microservices compose a single application from many smaller, loosely coupled services as opposed to the monolithic approach of a large, tightly coupled application.

The differences between microservices and SOA can be a bit less clear. While
technical contrasts can be drawn between microservices and SOA, especially around
the role of the enterprise service bus (ESB), it’s easier to consider the difference as
one of scope. SOA was an enterprise-wide effort to standardize the way all web
services in an organization talk to and integrate with each other, whereas
microservices architecture is application-specific.
The post "SOA vs. Microservices: What's the Difference?" goes into further details.

For more on the differences between microservices and monolithic architecture, watch
this video:

Microservices in the enterprise, 2021 - New IBM research reveals the benefits and
challenges of microservices adoption.

Register to download the e-book

How microservices benefit the organization


Microservices are likely to be at least as popular with executives and project leaders
as with developers. This is one of the more unusual characteristics of microservices
because architectural enthusiasm is typically reserved for software development
teams. The reason for this is that microservices better reflect the way many business
leaders want to structure and run their teams and development processes.

Put another way, microservices are an architectural model that better facilitates a
desired operational model. In a recent IBM survey of over 1,200 developers and IT
executives, 87% of microservices users agreed that microservices adoption is worth
the expense and effort. You can explore more perspectives on the benefits and
challenges of microservices using the interactive tool.

Here are just a few of the enterprise benefits of microservices.

Independently deployable

Perhaps the single most important characteristic of microservices is that because the services are smaller and independently deployable, it no longer requires an act of Congress to change a line of code or add a new feature to an application.

Microservices promise organizations an antidote to the visceral frustrations associated with small changes taking huge amounts of time. It doesn’t require a Ph.D. in
computer science to see or understand the value of an approach that better facilitates
speed and agility.

But speed isn’t the only value of designing services this way. A common emerging
organizational model is to bring together cross-functional teams around a business
problem, service, or product. The microservices model fits neatly with this trend
because it enables an organization to create small, cross-functional teams around one
service or a collection of services and have them operate in an agile fashion.

Microservices' loose coupling also builds a degree of fault isolation and better
resilience into applications. And the small size of the services, combined with their
clear boundaries and communication patterns, makes it easier for new team members
to understand the code base and contribute to it quickly—a clear benefit in terms of
both speed and employee morale.

Right tool for the job

In traditional n-tier architecture patterns, an application typically shares a common stack, with a large, relational database supporting the entire application. This approach
has several obvious drawbacks—the most significant of which is that every
component of an application must share a common stack, data model and database
even if there is a clear, better tool for the job for certain elements. It makes for bad
architecture, and it’s frustrating for developers who are constantly aware that a better,
more efficient way to build these components is available.

By contrast, in a microservices model, components are deployed independently and communicate over some combination of REST, event streaming and message brokers
—so it’s possible for the stack of every individual service to be optimized for that
service. Technology changes all the time, and an application composed of multiple,
smaller services is much easier and less expensive to evolve with more desirable
technology as it becomes available.

Precise scaling

With microservices, individual services can be individually deployed—but they can be individually scaled, as well. The resulting benefit is obvious: Done correctly,
microservices require less infrastructure than monolithic applications because they
enable precise scaling of only the components that require it, instead of the entire
application in the case of monolithic applications.

There are challenges, too

Microservices' significant benefits come with significant challenges. Moving from monolith to microservices means a lot more management complexity - a lot more
services, created by a lot more teams, deployed in a lot more places. Problems in one
service can cause, or be caused by, problems in other services. Logging data (used for
monitoring and problem resolution) is more voluminous, and can be inconsistent
across services. New versions can cause backward compatibility issues. Applications
involve more network connections, which means more opportunities for latency and
connectivity issues. A DevOps approach (as you'll read below) can address many of
these issues, but DevOps adoption has challenges of its own.

Nevertheless, these challenges aren't stopping non-adopters from adopting microservices - or adopters from deepening their microservices commitments. New
IBM survey data reveals that 56% of current non-users are likely or very likely to
adopt microservices within the next two years, and 78% of current microservices
users will likely increase the time, money and effort they've invested in microservices.

Modernize your applications for interoperability and ROI - Enhance the value of your
existing apps and reduce the cost to maintain them.

Learn more

Microservices both enable, and require, DevOps


Microservices architecture is often described as optimized for DevOps and continuous
integration/continuous delivery (CI/CD), and in the context of small services that can
be deployed frequently, it’s easy to understand why. 

But another way of looking at the relationship between microservices and DevOps is
that microservices architectures actually require DevOps in order to be successful.
While monolithic applications have a range of drawbacks that have been discussed
earlier in this article, they have the benefit of not being a complex distributed system
with multiple moving parts and independent tech stacks. In contrast, given the
massive increase in complexity, moving parts and dependencies that come with
microservices, it would be unwise to approach microservices without significant
investments in deployment, monitoring and lifecycle automation.

Andrea Crawford provides a deeper dive on DevOps in the following video:

Featured products

Red Hat OpenShift on IBM Cloud

IBM Cloud Kubernetes Service

IBM Cloud Code Engine


Key enabling technologies and tools
While just about any modern tool or language can be used in a microservices
architecture, there are a handful of core tools that have become essential and
borderline definitional to microservices:

Containers, Docker, and Kubernetes

One of the key elements of a microservice is that it’s generally pretty small. (There is
no arbitrary amount of code that determines whether something is or isn’t a
microservice, but “micro” is right there in the name.)

When Docker ushered in the modern container era in 2013, it also introduced the compute model that would become most closely associated with microservices.
Because individual containers don’t have the overhead of their own operating system,
they are smaller and lighter weight than traditional virtual machines and can spin up
and down more quickly, making them a perfect match for the smaller and lighter
weight services found within microservices architectures.

With the proliferation of services and containers, orchestrating and managing large
groups of containers quickly became one of the critical challenges. Kubernetes, an
open source container orchestration platform, has emerged as one of the most popular
orchestration solutions because it does that job so well.

In the video "Kubernetes Explained," Sai Vennam gives a comprehensive view of all
things Kubernetes:

API gateways

Microservices often communicate via API, especially when first establishing state.
While it’s true that clients and services can communicate with one another directly,
API gateways are often a useful intermediary layer, especially as the number of
services in an application grows over time. An API gateway acts as a reverse proxy
for clients by routing requests, fanning out requests across multiple services, and
providing additional security and authentication.

There are multiple technologies that can be used to implement API gateways,
including API management platforms, but if the microservices architecture is being
implemented using containers and Kubernetes, the gateway is typically implemented
using Ingress or, more recently, Istio.
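
As an illustration of the routing role described above, here is a toy sketch of a gateway that forwards requests to downstream services, assuming Python with Flask and the requests library; the service names and addresses are hypothetical:

# A toy sketch of an API gateway's routing role, assuming Flask and the
# requests library. The downstream service URLs are hypothetical.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Map path prefixes to backing services (service discovery would
# normally supply these addresses dynamically).
ROUTES = {
    "orders": "http://orders-service:5001",
    "users": "http://users-service:5002",
}

@app.route("/<service>/<path:rest>", methods=["GET", "POST"])
def proxy(service, rest):
    base = ROUTES.get(service)
    if base is None:
        return {"error": "unknown service"}, 404
    # Forward the request to the matching service (reverse proxy).
    resp = requests.request(request.method, f"{base}/{rest}",
                            params=request.args, data=request.get_data())
    return Response(resp.content, status=resp.status_code)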
Messaging and event streaming

While best practice might be to design stateless services, state nonetheless exists and
services need to be aware of it. And while an API call is often an effective way of
initially establishing state for a given service, it’s not a particularly effective way of
staying up to date. A constant polling, “are we there yet?” approach to keeping
services current simply isn’t practical.

Instead, it is necessary to couple state-establishing API calls with messaging or event streaming so that services can broadcast changes in state and other interested parties can listen for those changes and adjust accordingly. This job is likely best suited to a general-purpose message broker, but there are cases where an event streaming platform, such as Apache Kafka, might be a good fit. And by combining microservices with event-driven architecture, developers can build distributed, highly scalable, fault-tolerant and extensible systems that can consume and process very large amounts of events or information in real time.
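
To illustrate, here is a minimal sketch of a service broadcasting a state change as an event, assuming the kafka-python client and a broker at localhost:9092; the topic and payload are hypothetical:

# A minimal sketch of broadcasting a state change as an event, assuming
# the kafka-python client and a Kafka broker at localhost:9092.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish the change; any interested service subscribes to the topic
# instead of repeatedly polling the order service's API.
producer.send("order-events", {"order_id": "1234", "status": "SHIPPED"})
producer.flush()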

Serverless

Serverless architectures take some of the core cloud and microservices patterns to their logical conclusion. In the case of serverless, the unit of execution is not just a
small service, but a function, which can often be just a few lines of code. The line
separating a serverless function from a microservice is a blurry one, but functions are
commonly understood to be even smaller than a microservice.

Where serverless architectures and Functions-as-a-Service (FaaS) platforms share affinity with microservices is that they are both interested in creating smaller units of deployment and scaling precisely with demand.
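
To illustrate how small the serverless unit of execution can be, here is a sketch of a complete function in the OpenWhisk-style convention used by IBM Cloud Functions — a main function that accepts and returns a dictionary; the parameter names are hypothetical:

# A sketch of a complete serverless function in the OpenWhisk-style
# convention (a `main` that takes and returns a dict).
def main(params):
    name = params.get("name", "world")
    # The platform handles provisioning, scaling and teardown; the code
    # runs only when invoked, and billing stops when it returns.
    return {"greeting": f"Hello, {name}"}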

Microservices and cloud services


Microservices are not necessarily exclusively relevant to cloud computing, but there are a few important reasons why they so frequently go together—reasons that go
beyond microservices being a popular architectural style for new applications and the
cloud being a popular hosting destination for new applications.

Among the primary benefits of microservices architecture are the utilization and cost
benefits associated with deploying and scaling components individually. While these
benefits would still be present to some extent with on-premises infrastructure, the
combination of small, independently scalable components coupled with on-demand,
pay-per-use infrastructure is where real cost optimizations can be found.

Secondly, and perhaps more importantly, another primary benefit of microservices is that each individual component can adopt the stack best suited to its specific job. Stack proliferation can lead to serious complexity and overhead when you manage it yourself, but consuming the supporting stack as cloud services can dramatically minimize management challenges. Put another way, while it’s not impossible to roll
your own microservices infrastructure, it’s not advisable, especially when just starting
out.

Common patterns
Within microservices architectures, there are many common and useful design,
communication, and integration patterns that help address some of the more common
challenges and opportunities, including the following:

 Backend-for-frontend (BFF) pattern: This pattern inserts a layer between the user experience and the resources that experience calls on. For example, an app
used on a desktop will have different screen size, display, and performance
limits than a mobile device. The BFF pattern allows developers to create and
support one backend type per user interface using the best options for that
interface, rather than trying to support a generic backend that works with any
interface but may negatively impact frontend performance.
 Entity and aggregate patterns: An entity is an object distinguished by its identity. For example, on an e-commerce site, a Product object might be distinguished by product name, type, and price. An aggregate is a collection of related entities that should be treated as one unit. So, for the e-commerce site, an Order would be a collection (aggregate) of products (entities) ordered by a buyer. These patterns are used to classify data in meaningful ways (see the sketch after this list).
 Service discovery patterns: These help applications and services find each
other. In a microservices architecture, service instances change dynamically
due to scaling, upgrades, service failure, and even service termination. These
patterns provide discovery mechanisms to cope with this transience. Load
balancing may use service discovery patterns by using health checks and
service failures as triggers to rebalance traffic.
 Adapter microservices patterns: Think of adapter patterns in the way you
think of plug adapters that you use when you travel to another country. The
purpose of adapter patterns is to help translate relationships between classes or
objects that are otherwise incompatible. An application that relies on third-
party APIs might need to use an adapter pattern to ensure the application and
the APIs can communicate.
 Strangler application pattern: These patterns help manage refactoring a
monolithic application into microservices applications. The colorful name
refers to how a vine (microservices) slowly and over time overtakes and
strangles a tree (a monolithic application).
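
To make the entity and aggregate patterns above concrete, here is a minimal sketch using Python dataclasses; the e-commerce names are hypothetical:

# A minimal sketch of the entity and aggregate patterns, assuming
# Python 3.9+ dataclasses. The e-commerce names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Product:
    """An entity: distinguished by its identity (here, the product id)."""
    product_id: str
    name: str
    price: float

@dataclass
class Order:
    """An aggregate: related entities treated as one unit."""
    order_id: str
    buyer: str
    products: list[Product] = field(default_factory=list)

    def total(self) -> float:
        return sum(p.price for p in self.products)

order = Order("o-1", "alice", [Product("p-1", "Mug", 9.99)])
print(order.total())  # operations apply to the aggregate as a whole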

You can learn more about these patterns in "How to use development patterns with
microservices (part 4)." IBM Developer also provides a lot of information if you want
to learn about other microservices code patterns.

Anti-patterns
While there are many patterns for doing microservices well, there are an equally
significant number of patterns that can quickly get any development team into trouble.
Some of these—rephrased as microservices “don’ts”—are as follows:

 The first rule of microservices is, don’t build microservices: Stated more accurately, don’t start with microservices. Microservices are a way to manage complexity once applications have gotten too large and unwieldy to be updated
and maintained easily. Only when you feel the pain and complexity of the
monolith begin to creep in is it worth considering how you might refactor that
application into smaller services. Until you feel that pain, you don’t even really
have a monolith that needs refactoring.
 Don’t do microservices without DevOps or cloud services: Building out
microservices means building out distributed systems, and distributed systems
are hard (and they are especially hard if you make choices that make it even
harder). Attempting to do microservices without either a) proper deployment
and monitoring automation or b) managed cloud services to support your now
sprawling, heterogenous infrastructure, is asking for a lot of unnecessary
trouble. Save yourself the trouble so you can spend your time worrying about
state. 
 Don’t make too many microservices by making them too small: If you go
too far with the “micro” in microservices, you could easily find yourself with
overhead and complexity that outweighs the overall gains of a microservice
architecture. It’s better to lean toward larger services and then only break them
apart when they start to develop characteristics that microservices solve for—
namely that it’s becoming hard and slow to deploy changes, a common data
model is becoming overly complex, or that different parts of the service have
different load/scale requirements.
 Don’t turn microservices into SOA: Microservices and service-oriented
architecture (SOA) are often conflated with one another, given that at their
most basic level, they are both interested in building reusable individual
components that can be consumed by other applications. The difference
between microservices and SOA is that microservices projects typically involve
refactoring an application so it’s easier to manage, whereas SOA is concerned
with changing the way IT services work enterprise-wide. A microservices
project that morphs into an SOA project will likely buckle under its own
weight.
 Don’t try to be Netflix: Netflix was one of the early pioneers of microservices
architecture when building and managing an application that accounted for one-
third of all Internet traffic—a kind of perfect storm that required them to build
lots of custom code and services that are unnecessary for the average
application. You’re much better off starting with a pace you can handle,
avoiding complexity, and using as many off-the-shelf tools as possible.

Tutorials: Build microservices skills


If you're ready to learn more about how to use microservices, or if you need to build
on your microservices skills, try one of these tutorials:

 Introduction to microservices
 Quick lab: Create highly scalable web application microservices with Node.js
 Get started with Java microservices using Spring Boot and Cloudant
 Create, run, and deploy Spring microservices in five minutes
 Microservices, SOA, and APIs: Friends or enemies?

Microservices and IBM Cloud


Microservices enable innovative development at the speed of modern business. Learn
how to leverage the scalability and flexibility of the cloud by deploying independent
microservices into cloud environments. See what it would be like to modernize your
applications with help from IBM. 

Take the next step:


 Free your development teams by relying on automated iteration with help
from IBM cloud native development tools.
 Learn more about managed Kubernetes by getting started with Red
Hat OpenShift on IBM Cloud or the IBM Cloud Kubernetes Service. Also,
check out IBM Cloud Code Engine for more about serverless computing.
 Microservices are just as much about team process and organization as
technology. Strategically plan your DevOps approach with help from IBM
DevOps.

What is DevOps?
DevOps speeds delivery of higher quality software by combining and automating the
work of software development and IT operations teams.

By definition, DevOps outlines a software development process and an organizational culture shift that speeds the delivery of higher quality software by automating and integrating the efforts of development and IT operations teams – two groups that traditionally practiced separately from each other, or in silos.

In practice, the best DevOps processes and cultures extend beyond development and
operations to incorporate inputs from all application stakeholders - including platform
and infrastructure engineering, security, compliance, governance, risk management,
line-of-business, end-users and customers - into the software development lifecycle. 

DevOps represents the current state of the evolution of software delivery cycles
during the past 20+ years, from giant application-wide code releases every several
months or even years, to iterative smaller feature or functional updates released as
frequently as every day or several times per day.

Ultimately, DevOps is about meeting software users’ ever-increasing demand for frequent, innovative new features and uninterrupted performance and availability.

Microservices in the enterprise, 2021 - New IBM research reveals the benefits and
challenges of microservices adoption.

Download the ebook

How we got to DevOps


Until just before 2000, most software was developed and updated using waterfall
methodology, a linear approach to large-scale development projects. Software
development teams would spend months developing large bodies of new code that
impacted most or all of the application. Because the changes were so extensive, they
spent several more months integrating that new code into the code base. 

Next, quality assurance (QA), security and operations teams would spend still more
months testing the code. The result was months or even years between software
releases, and often several significant patches or bug fixes between releases as well.
And this “big bang” approach to feature delivery was often characterized by complex
and risky deployment plans, hard to schedule interlocks with upstream and
downstream systems, and IT’s “great hope” that the business requirements had not
changed drastically in the months leading up to production “go live.”

To speed development and improve quality, development teams began adopting agile
software development methodologies, which are iterative rather than linear and focus
on making smaller, more frequent updates to the application code base. Chief among
these methodologies are continuous integration and continuous delivery, or CI/CD. In CI/CD, smaller chunks of new code are merged into the code base every one or two
weeks, and then automatically integrated, tested and prepared for deployment to the
production environment. Agile evolved the “big bang” approach into a series of
“smaller snaps” which also compartmentalized risks.

The more effectively these agile development practices accelerated software development and delivery, the more they exposed still-siloed IT operations - system
provisioning, configuration, acceptance testing, management, monitoring - as the next
bottleneck in the software delivery lifecycle. 

So DevOps grew out of agile. It added new processes and tools that extend the
continuous iteration and automation of CI/CD to the rest of the software delivery
lifecycle. And it implemented close collaboration between development and
operations at every step in the process.

Featured products

UrbanCode Deploy

UrbanCode Velocity

Rational Test Workbench

IBM Architecture Room Live


IBM Cloud Pak for Watson AIOps

How DevOps works: The DevOps lifecycle


The DevOps lifecycle (sometimes called the continuous delivery pipeline, when
portrayed in a linear fashion) is a series of iterative, automated development
processes, or workflows, executed within a larger, automated and iterative
development lifecycle designed to optimize the rapid delivery of high-quality
software. The name and number of workflows can differ depending on whom you ask,
but they typically boil down to these six:

 Planning (or ideation). In this workflow, teams scope out new features and
functionality in the next release, drawing from prioritized end-user feedback
and case studies, as well as inputs from all internal stakeholders. The goal in
the planning stage is to maximize the business value of the product by
producing a backlog of features that, when delivered, produce a desired outcome that has value.
 Development. This is the programming step, where developers test, code, and
build new and enhanced features, based on user stories and work items in the
backlog. A combination of practices such as test-driven development (TDD), pair programming, and peer code reviews, among others, is common.
Developers often use their local workstations to perform the “inner loop” of
writing and testing code before sending it down the continuous delivery
pipeline.
 Integration (or build, or continuous integration and continuous delivery (CI/CD)). As noted above, in this workflow the new code is integrated into the
existing code base, then tested and packaged into an executable for
deployment. Common automation activities include merging code changes
into a “master” copy, checking out that code from a source code repository,
and automating the compile, unit test and packaging into an executable. Best
practice is to store the output of the CI phase in a binary repository, for the
next phase.
 Deployment (usually called continuous deployment). Here the runtime build
output (from integration) is deployed to a runtime environment - usually a
development environment where runtime tests are executed for quality,
compliance and security. If errors or defects are found, developers have a
chance to intercept and remediate any problems before any end users see
them. There are typically environments for development, test, and
production, with each environment requiring progressively “stricter” quality
gates. A good practice for deployment to a production environment is typically
to deploy first to a subset of end users, and then eventually to all users once
stability is established.
 Operations. If getting features delivered to a production environment is
characterized as “Day 1”, then once features are running in production “Day
2” operations occur. Monitoring feature performance, behavior, and
availability ensures that the features provide added value to end users. Operations ensures that features are running smoothly and that there
are no interruptions in service - by making sure the network, storage,
platform, compute and security posture are all healthy! If something goes
wrong, operations ensures incidents are identified, the proper personnel are
alerted, problems are determined, and fixes are applied.
 Learning (sometimes called continuous feedback). This is the gathering of
feedback from end users and customers on features, functionality,
performance and business value to take back to planning for enhancements and features in the next release. This also includes any learning and backlog items from the operations activities that could empower developers to proactively avoid repeating past incidents.
This is the point where the “wraparound” to the Planning phase happens and
we “continuously improve!”

Three other important continuous workflows occur between these workflows:

Continuous testing: Classical DevOps lifecycles include a discrete “test” phase that occurs between integration and deployment. However, DevOps has advanced such
that certain elements of testing can occur in planning (behavior-driven development),
development (unit testing, contract testing), integration (static code scans, CVE scans,
linting), deployment (smoke testing, penetration testing, configuration testing),
operations (chaos testing, compliance testing), and learning (A/B testing). Testing is a
powerful form of risk and vulnerability identification and provides an opportunity for
IT to accept, mitigate, or remediate risks.
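
For example, a unit test of the kind automated during the development and integration workflows might look like this minimal sketch, assuming pytest; the function under test is hypothetical:

# A minimal sketch of an automated unit test, assuming pytest.
# The function under test (apply_discount) is hypothetical.
import pytest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)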

Security: While waterfall methodologies and agile implementations 'tack on' security workflows after delivery or deployment, DevOps strives to incorporate security from the start (Planning) - when security issues are easiest and least expensive to address - and continuously throughout the rest of the development cycle. This approach to security is referred to as shifting left. Some organizations have had less success shifting left than others, which led to the rise of DevSecOps (see below).
Compliance: Regulatory compliance (governance and risk) is also best addressed early and throughout the development lifecycle. Regulated industries are often mandated to provide a certain level of observability and traceability into how features are delivered and managed in their runtime operational environment. This requires planning, development, testing, and enforcement of policies in the continuous delivery pipeline and in the runtime environment. Auditability of compliance measures is extremely important for proving compliance to third-party auditors.

DevOps culture
It's generally accepted that DevOps methods can't work without a commitment to
DevOps culture, which can be summarized as a different organizational and technical
approach to software development.

At the organizational level, DevOps requires continuous communication,


collaboration and shared responsibility among all software delivery stakeholders -
software development and IT operations teams for certain, but also security,
compliance, governance, risk and line-of-business teams - to innovate quickly and
continually, and to build quality into software from the start.

In most cases the best way to accomplish this is to break down these silos and
reorganize them into cross-functional, autonomous DevOps teams that can work on
code projects from start to finish - planning to feedback - without making handoffs to,
or waiting for approvals from, other teams. When put in the context of agile
development, the shared accountability and collaboration are the bedrock of having a
shared product focus that has a valuable outcome.

At the technical level, DevOps requires a commitment to automation that keeps projects moving within and between workflows, and to feedback and measurement that enable teams to continually accelerate cycles and improve software quality and performance.

DevOps tools: Building a DevOps toolchain


The demands of DevOps and DevOps culture put a premium on tooling that supports
asynchronous collaboration, seamlessly integrates DevOps workflows, and automates
the entire DevOps lifecycle as much as possible. Categories of DevOps tools include:
 Project management tools - tools that enable teams to build a backlog of user stories (requirements) that form coding projects, break them down into smaller tasks and track the tasks through to completion. Many support agile project management practices, such as Scrum, Lean and Kanban, that developers bring to DevOps. Popular options include GitHub Issues and Jira.
 Collaborative source code repositories - version-controlled coding environments that let multiple developers work on the same code base. Code repositories should integrate with CI/CD, testing and security tools, so that when code is committed to the repository it can automatically move to the next step. Popular code repositories include GitHub and GitLab.
 CI/CD pipelines - tools that automate code checkout, building, testing and deployment. Jenkins is the most popular open source tool in this category; many previously open-source alternatives, such as CircleCI, are now available in commercial versions only. When it comes to continuous deployment (CD) tools, Spinnaker straddles the application and infrastructure-as-code layers. ArgoCD is another popular open source choice for Kubernetes-native CI/CD.
 Test automation frameworks - these include software tools, libraries and best
practices for automating unit, contract, functional, performance, usability,
penetration and security tests. The best of these tools support multiple
languages; some use artificial intelligence (AI) to automatically reconfigure
tests in response to code changes. The expanse of test tools and frameworks
is far and wide! Popular open source test automation frameworks include
Selenium, Appium, Katalon, Robot Framework, and Serenity (formerly known
as Thucydides).
 Configuration management (infrastructure as code) tools - these enable
DevOps engineers to configure and provision fully versioned and fully
documented infrastructure by executing a script. Open source options include
Ansible (Red Hat), Chef, Puppet and Terraform. Kubernetes performs the same
function for containerized applications (see 'DevOps and cloud-native
development,' below).
 Monitoring tools - these help DevOps teams identify and resolve system issues; they also gather and analyze data in real time to reveal how code changes impact application performance. Popular monitoring tools include Datadog, Nagios, Prometheus and Splunk.
 Continuous feedback tools - tools that gather feedback from users, either
through heatmapping (recording users' actions on screen), surveys, or self-
service issue ticketing.
DevOps and cloud-native development
Cloud-native is an approach to building applications that leverage foundational cloud computing technologies. The goal of cloud-native is to enable consistent and optimal application development, deployment, management and performance across public, private and multicloud environments.

Today, cloud-native applications are typically

 Built using microservices - loosely coupled, independently deployable components that have their own self-contained stack, and communicate with each other via REST APIs, event streaming or message brokers.
 Deployed in containers - executable units of code that contain all the code,
runtimes and operating system dependencies required to run the application.
(For most organizations, 'containers' is synonymous
with Docker containers, but other container types exist.)
 Operated (at scale) using Kubernetes, an open-source container orchestration
platform for scheduling and automating the deployment, management and
scaling of containerized applications.

In many ways, cloud-native development and DevOps are made for each other. 

For example, developing and updating microservices - that is, the iterative delivery of
small units of code to a small code base - is a perfect fit for DevOps rapid release and
management cycles. And it would be difficult to deal with the complexity of a
microservices architecture without DevOps deployment and operation. A recent IBM
survey of developers and IT executives found that 78% of current microservices users
expect to increase the time, money and effort they’ve invested in the architecture, and
56% of non-users are likely to adopt microservices within the next two years. 

By packaging and permanently fixing all OS dependencies, containers enable rapid CI/CD and deployment cycles, because all integration, testing and deployment occurs in the same environment.
in the same environment. And Kubernetes orchestration performs the same continuous
configuration tasks for containerized applications as Ansible, Puppet and Chef
perform for non-containerized applications.

Most leading cloud computing providers - including AWS, Google, Microsoft Azure,
and IBM Cloud - offer some sort of managed DevOps pipeline solution.
What is DevSecOps?
DevSecOps is DevOps that continuously integrates and automates security throughout
the DevOps lifecycle - from start to finish, from planning through feedback and back
to planning again.

Another way to put this is that DevSecOps is what DevOps was supposed to be from
the start. But two of the early significant (and for a time insurmountable) challenges
of DevOps adoption were integrating security expertise into cross-functional teams (a
cultural problem), and implementing security automation into the DevOps lifecycle (a
technical issue). Security came to be perceived as the "Team of 'No,'" and as an
expensive bottleneck in many DevOps practices.

DevSecOps emerged as a specific effort to integrate and automate security as


originally intended. In DevSecOps, security is a “first class” citizen and stakeholder
along with development and Operations, and brings security into the development
process with a product focus.

Watch 'What is DevSecOps?' to learn more about DevSecOps principles, benefits and use cases:

DevOps and site reliability engineering (SRE)


Site reliability engineering (SRE) uses software engineering techniques to automate
IT operations tasks - e.g. production system management, change management,
incident response, even emergency response - that might otherwise be performed
manually by systems administrators. SRE seeks to transform the classical system
administrator into an engineer.

The ultimate goal of SRE is similar to the goal of DevOps, but more specific: SRE
aims to balance an organization's desire for rapid application development with its
need to meet performance and availability levels specified in service level agreements
(SLAs) with customers and end-users. 

Site reliability engineers achieve this balance by determining an acceptable level of operational risk caused by applications - called an 'error budget' - and by automating operations to meet that level.
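
As a worked example of the arithmetic, a hypothetical 99.9% availability SLO over a 30-day window leaves just over 43 minutes of allowable downtime:

# A worked example of error-budget arithmetic, assuming a hypothetical
# 99.9% availability objective over a 30-day window.
SLO = 0.999                      # promised availability
WINDOW_MINUTES = 30 * 24 * 60    # 43,200 minutes in a 30-day month

error_budget = (1 - SLO) * WINDOW_MINUTES
print(f"Allowed downtime: {error_budget:.1f} minutes/month")  # 43.2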
On a cross-functional DevOps team, SRE can serve as a bridge between development
and operations, providing the metrics and automation the team needs to push code
changes and new features through the DevOps pipeline as quickly as possible, without
'breaking' the terms of the organization's SLAs. 

Learn more about site reliability engineering

DevOps and IBM Cloud

DevOps requires collaboration across business, development, and operations stakeholders to expeditiously deliver and run reliable software. Organizations that use
DevOps tools and practices while transforming their culture build a powerful
foundation for digital transformation, and for modernizing their applications as the
need for automation widens across business and IT operations.

A move toward greater automation should start with small, measurably successful
projects, which you can then scale and optimize for other processes and in other parts
of your organization.

Working with IBM, you’ll have access to AI-powered automation capabilities, including prebuilt workflows, to make every IT services process more intelligent,
freeing up teams to focus on the most important IT issues and accelerate innovation.

Take the next step:

 See how you can place AI at the core of your entire IT operations toolchain
with IBM Cloud Pak for Watson AIOps.
 Explore additional IBM tools to support a DevOps approach, including IBM
Architecture Room Live, IBM Rational Test Workbench, IBM UrbanCode
Deploy, and IBM UrbanCode Velocity.
 Discover how IBM DevOps can help you shorten releases, improve reliability
and stay ahead of the competition.
 Build DevOps skills through our “Introduction to DevOps for Cloud Solutions”
course contained within the Cloud Architect Professional learning curriculum
or the courses on DevOps capabilities and IBM Cloud Continuous Delivery
Toolchain contained within the Cloud Developer Professional learning
curriculum.
 Register to download the Gartner report and discover how to future-proof
your IT operations with AI.
 Download the IBM Cloud infographic that shows the benefits of AI-powered
automation for IT operations. (PDF, 467 KB)
 Check out the eBook, Putting the Ops into DevOps for Dummies (PDF, 2.1 MB).
 Read about the five “must-have’s” for automation success (link resides outside
IBM) in this HFS Research report.

Container Orchestration
Container orchestration automates and simplifies the provisioning, deployment, and management of containerized applications.

Featured products

Red Hat OpenShift on IBM Cloud

IBM Cloud Satellite

IBM Cloud Code Engine

IBM Cloud Kubernetes Service

What is container orchestration? 


Container orchestration automates the provisioning, deployment, networking, scaling,
availability, and lifecycle management of containers. Today, Kubernetes is the most
popular container orchestration platform, and most leading public cloud providers -
including Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud and
Microsoft Azure - offer managed Kubernetes services. Other container orchestration
tools include Docker Swarm and Apache Mesos.

More on containers, and why they need orchestration


Containers are lightweight, executable application components that combine
application source code with all the operating system (OS) libraries
and dependencies required to run the code in any environment. 

The ability to create containers has existed for decades, but it became widely available
in 2008 when Linux included container functionality within its kernel, and widely
used with the arrival of the Docker open-source containerization platform in 2013.
(Docker is so popular that "Docker containers" and "containers" are often used
interchangeably.) 

Because they are smaller, more resource-efficient and more portable than virtual machines (VMs), containers - and more specifically, containerized microservices or serverless functions - have become the de facto compute units of modern cloud-native applications.

In small numbers, containers are easy enough to deploy and manage manually. But in
most organizations the number of containerized applications is growing rapidly, and
managing them at scale - especially as part of a continuous integration/continuous
delivery (CI/CD) or DevOps pipeline - is impossible without automation.

Enter container orchestration, which automates the operations tasks around deploying and running containerized applications and services. According to the latest IBM research (PDF, 1.4MB), 70% of developers using containers report using a container orchestration solution, and 70% of those report using a fully managed (cloud-managed) container orchestration service at their organization.

Download the full report: Containers in the enterprise

How container orchestration works


While there are differences in methodologies and capabilities across tools, container
orchestration is essentially a three-step process (or cycle, when part of an iterative
agile or DevOps pipeline).

Most container orchestration tools support a declarative configuration model: A developer writes a configuration file (in YAML or JSON depending on the tool) that defines a desired configuration state, and the orchestration tool runs the file and uses its own intelligence to achieve that state. The configuration file typically
 Defines which container images make up the application, and where they are located (in what registry)
 Provisions the containers with storage and other resources
 Defines and secures the network connections between containers
 Specifies versioning (for phased or canary rollouts)
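
As a sketch of this declarative model, the following uses the official Kubernetes Python client to submit a manifest describing a desired state — three replicas of a containerized service — which the orchestrator then works to achieve. It is an illustration, not a definitive implementation; the names and registry are hypothetical:

# A sketch of declarative orchestration, assuming the official Kubernetes
# Python client and a reachable cluster. Names/registry are hypothetical.
from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,  # desired state: three replicas for resiliency
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {"containers": [{
                "name": "orders",
                "image": "registry.example.com/orders:1.2.0",
            }]},
        },
    },
}

config.load_kube_config()  # use local kubeconfig credentials
client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)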
The orchestration tool schedules deployment of the containers (and replicas of the
containers, for resiliency) to a host, choosing the best host based on available CPU
capacity, memory, or other requirements or constraints specified in the configuration
file. 

Once the containers are deployed the orchestration tool manages the lifecycle of
the containerized application based on the container definition file (very often a
Dockerfile). This includes 

 Managing scalability (up and down), load balancing, and resource allocation among the containers;
 Ensuring availability and performance by relocating the containers to another host in the event of an outage or a shortage of system resources;
 Collecting and storing log data and other telemetry used to monitor the health
and performance of the application.

Benefits of container orchestration

It's probably clear that the chief benefit of container orchestration is automation - and not only because it greatly reduces the effort and complexity of managing a large containerized application estate. By automating operations, orchestration supports an agile or DevOps approach that allows teams to develop and deploy in rapid, iterative cycles and release new features and capabilities faster.

In addition, an orchestration tool's intelligence can enhance or extend many of the inherent benefits of containerization. For example, automated host selection and
resource allocation, based on declarative configuration, maximizes efficient use of
computing resources; automated health monitoring and relocation of containers
maximizes availability.

Kubernetes
As noted above, Kubernetes is the most popular container orchestration platform.
Together with other tools in the container ecosystem, Kubernetes enables a company
to deliver a highly productive platform-as-a-service (PaaS) that addresses many of the
infrastructure- and operations-related tasks and issues around cloud-native application
development, so that development teams can focus exclusively on coding and
innovation.
Kubernetes’ advantages over other orchestration solutions are largely a result of its
more comprehensive and sophisticated functionality in several areas, including:

 Container deployment. Kubernetes deploys a specified number of containers to a specified host and keeps them running in a desired state. 
 Rollouts. A rollout is a change to a deployment. Kubernetes lets you initiate,
pause, resume, or roll back rollouts. 
 Service discovery. Kubernetes can automatically expose a container to the
internet or to other containers using a DNS name or IP address. 
 Storage provisioning. Developers can set Kubernetes to mount persistent local or cloud storage for their containers as needed. 
 Load balancing and scalability. When traffic to a container
spikes, Kubernetes can employ load balancing and scaling to distribute it
across the network to ensure stability and performance. (It also saves
developers the work of setting up a load balancer.)
 Self-healing for high availability. When a container fails, Kubernetes can
restart or replace it automatically. It can also take down containers that don’t
meet your health-check requirements. 
 Support and portability across multiple cloud providers. As noted earlier,
Kubernetes enjoys broad support across all leading cloud providers. This is
especially important for organizations deploying applications to
a hybrid cloud or hybrid multicloud environment.
 Growing ecosystem of open-source tools. Kubernetes also has an ever-
expanding stable of usability and networking tools to enhance its capabilities
via the Kubernetes API. These include Knative, which enables containers to
run as serverless workloads; and Istio, an open source service mesh. 

Learn more about Kubernetes

Container orchestration and IBM Cloud


Containers are ideal for modernizing your applications and optimizing your IT
infrastructure. Container services from IBM Cloud, built on open source technologies
like Kubernetes, can facilitate and accelerate your path to cloud-native application
development, and to an open hybrid cloud approach that integrates the best features
and functions from private cloud, public cloud and on-premises IT infrastructure.

Take the next step:


 Learn how you can deploy highly available, fully managed clusters for
your containerized applications with a single click using Red Hat OpenShift
on IBM Cloud.
 Deploy and manage containerized applications consistently across on-
premises, edge computing and public cloud environments from any vendor
with IBM Cloud Satellite.
 Run container images, batch jobs or source code as a serverless workload - no
sizing, deploying, networking or scaling required - with IBM Cloud Code
Engine.

To get started right away, sign up for an IBM Cloud account.

Cloud Databases
Learn more about data organization in the cloud.

What is a Cloud Database?


A cloud database is a database service built and accessed through a cloud platform. It
serves many of the same functions as a traditional database, with the added flexibility
of cloud computing. Users install software on a cloud infrastructure to implement the
database.

Key features
 A database service built and accessed through a cloud platform
 Enables enterprise users to host databases without buying dedicated
hardware
 Can be managed by the user or offered as a service and managed by a
provider
 Can support SQL (including MySQL) or NoSQL databases
 Accessed through a web interface or vendor-provided API
Benefits
Ease of access: Users can access cloud databases from virtually anywhere using a
vendor’s API or web interface.

Scalability: Cloud databases can expand their storage capacities at run time to accommodate changing needs. Organizations only pay for what they use.

Disaster recovery: In the event of a natural disaster, equipment failure or power outage, data is kept secure through backups on remote servers.

See IBM Cloud databases

Considerations
 Control options: Users can opt for a virtual machine image managed like a
traditional database or a provider’s database as a service (DBaaS).
 Database technology: SQL databases are difficult to scale but very common.
NoSQL databases scale more easily but do not work with some applications.
 Security: Most cloud database providers encrypt data and provide other security
measures; organizations should research their options.
 Maintenance: When using a virtual machine image, one should ensure that IT
staffers can maintain the underlying infrastructure.

IBM Cloud database solutions provide a scalable managed data layer
IBM Cloud database solutions offer a complete portfolio of managed services for data
and analytics — a hybrid, open source-based approach that addresses the data-
intensive needs of application developers, data scientists, and IT architects to deliver
immediate and longer-term benefits.

See "A Brief Overview of the Database Landscape."


An IBM perspective: Data management
Managing engagement and application data for massive networks of mobile users or
remote devices can be a scalability and availability nightmare.

The problem is that most databases require updates to occur in a central “master”
database. This can result in performance bottlenecks and also prevent applications
from running if the connection to the master database is unavailable.

A cloud database such as IBM Cloudant enables you to push database access to the
farthest edge of the network—such as mobile devices, remote facilities, sensors, and
Internet-enabled goods—so you can scale bigger and enable applications to continue
running while offline.
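
As a brief sketch of this kind of vendor-API access, here is how a document might be stored with the cloudant Python library; the account name, API key, and document contents are placeholders:

# A brief sketch of accessing a cloud database over a vendor API,
# assuming the `cloudant` Python library and IAM credentials.
from cloudant.client import Cloudant

client = Cloudant.iam("ACCOUNT_NAME", "IAM_API_KEY", connect=True)
db = client.create_database("devices", throw_on_exists=False)

# Documents are JSON; the service replicates them toward the edge.
db.create_document({"_id": "sensor-17", "reading": 21.5})
client.disconnect()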

Hybrid databases create a distributed hybrid data cloud for increased performance,
reach, uptime, mobility, and cost savings:

 Start small, grow big.
 Elastically scale on demand.
 Clusters can span multiple data centers.
 Manage your cloud yourself or let a provider manage it for you.
 Mix and match cloud providers to optimize geographic reach, service level
agreements (SLAs), pricing, and regulatory requirements.

This is the path to hybrid cloud that accommodates growing data management needs,
not infrastructure needs. Organizations can continuously optimize the data layer for
cost, performance, security, and reach. They can break up their data, distribute it, and
move it closer to their users.

For example, financial organizations are embracing the hybrid concept by using the
database as a central repository for all their disparate data sources and then delivering
this financial data in JSON format. This data is then distributed to the database as a
service and replicated to geographic regions across the world.

If a customer in Singapore has to wait more than four seconds for their mobile
application data to be retrieved from a database in New Jersey, that customer is not
likely to use that application again. Database-as-a-service can replicate and distribute
immediately and offer near real-time access to data worldwide.

Cloud databases can collect, deliver, replicate, and push to the edge all your data
using the new hybrid cloud concept. Users no longer have to deploy the dependent
middleware to deliver database requests anywhere in the world. They can connect
applications directly to their database.

What is cloud computing?


Cloud computing, sometimes referred to simply as “cloud,” is the use of computing resources —
servers, database management, data storage, networking, software applications, and special
capabilities such as blockchain and artificial intelligence (AI) — over the internet, as opposed to
owning and operating those resources yourself, on premises. 

Compared to traditional IT, cloud computing offers organizations a host of benefits: the cost-
effectiveness of paying for only the resources you use; faster time to market for mission-critical
applications and services; the ability to scale easily, affordably and — with the right cloud
provider — globally; and much more (see “What are the benefits of cloud computing?”
below). And many organizations are seeing additional benefits from combining public cloud
services purchased from a cloud services provider with private cloud infrastructure they operate
themselves to deliver sensitive applications or data to customers, partners and employees.

Increasingly, “cloud computing” is becoming synonymous with “computing.” For example, in a 2019 survey of nearly 800 companies, 94% were using some form of cloud computing (link
resides outside IBM). Many businesses are still in the first stages of their cloud journey, having
migrated or deployed about 20% of their applications to the cloud, and are working out the
unique security, compliance and geographic implications of moving their remaining mission-
critical applications. But move they will: Industry analyst Gartner predicts that more than half of
companies using cloud today will move to an all-cloud infrastructure by next year (2021) (link
resides outside IBM).

A brief history of cloud computing


Cloud computing dates back to the 1950s, and over the years, it has evolved through many
phases that were first pioneered by IBM, including grid, utility and on-demand computing.

To read a full history of cloud computing, from mainframes to how virtualization introduced the
modern-day cloud, check out IBM’s history of cloud computing blog post.

What are the benefits of cloud computing?


Compared to traditional IT, cloud computing typically enables:

 Greater cost efficiency. While traditional IT requires you to purchase computing capacity in anticipation of growth or surges in traffic — capacity that sits unused until you grow or traffic surges — cloud computing enables you to pay for only the capacity you need, when you need it.
Cloud also eliminates the ongoing expense of purchasing, housing, maintaining and managing
infrastructure on premises.
 Improved agility; faster time to market. On the cloud you can provision and deploy (“spin up”) a server in minutes; purchasing and deploying the same server on premises might take weeks or months.
 Greater scalability and elasticity. Cloud computing lets you scale workloads automatically —
up or down — in response to business growth or surges in traffic. And working with a cloud
provider that has data centers spread around the world enables you to scale up or down globally
on demand, without sacrificing performance.
 Improved reliability and business continuity. Because most cloud providers have redundancy
built into their global networks, data backup and disaster recovery are typically much easier and
less expensive to implement effectively in the cloud than on premises. Providers who offer packaged disaster recovery solutions — referred to as disaster recovery as a service, or DRaaS — make the process even easier, more affordable and less disruptive.
 Continually improving performance. The leading cloud service providers regularly update their
infrastructure with the latest, highest-performing computing, storage and networking hardware.
 Better security, built in. Traditionally, security concerns have been the leading obstacle for
organizations considering cloud adoption. But in response to demand, the security offered by
cloud service providers is steadily outstripping on-premises solutions. According to security
software provider McAfee, today 52% of companies experience better security in the cloud than
on premises (link resides outside IBM). Gartner has predicted that by this year (2020),
infrastructure as a service (IaaS) cloud workloads will experience 60% fewer security incidents
than those in traditional data centers (link resides outside IBM).

With the right provider, cloud also offers the added benefit of greater choice and flexibility.
Specifically, a cloud provider that supports open standards and a hybrid multicloud
implementation (see “Multicloud and Hybrid Multicloud” below) gives you the choice and
flexibility to combine cloud and on-premises resources from unlimited vendors into a single,
optimized, seamlessly integrated infrastructure you can manage from a single point of control —
an infrastructure in which each workload runs in the best possible location based on its specific
performance, security, regulatory compliance and cost requirements.

Cloud services
IaaS – Infrastructure as a Service

The original cloud computing service, IaaS provides foundational computing resources —
physical or virtual servers, operating system software, storage, networking infrastructure, data
center space — that you use over an internet connection on a pay-as-you-use basis. IaaS lets you
rent physical IT infrastructure for building your own remote data center on the cloud, instead of
building a data center on premises. IaaS remains the fastest-growing cloud services segment;
according to Gartner, it will grow 24% this year.

Learn more about IaaS

PaaS – Platform as a Service

PaaS provides a complete cloud-based platform for developing, running and managing
applications without the cost, complexity and inflexibility of building and maintaining that
platform on premises. The PaaS provider hosts everything — servers, networks, storage,
operating system software, databases — at its data center. Development teams can use all of it
for a monthly fee based on usage, and can purchase more resources on demand, as needed.

With PaaS you can deploy web and mobile applications to the cloud in minutes, and innovate
faster and more cost-effectively in response to market opportunities and competitive threats.

PaaS is one of the fastest-growing cloud offerings today; Gartner forecasts the total market for
PaaS to exceed $34 billion by 2022 (link resides outside IBM), doubling its 2018 size.

Learn more about PaaS

Serverless Computing

Serverless computing (typically referred to simply as “serverless”) is a hyper-efficient PaaS,
differing from conventional PaaS in two important ways:

 Serverless offloads all responsibility for infrastructure management tasks (scaling, scheduling,
patching, provisioning)  to the cloud provider, allowing developers to focus all their time and
energy on code.
 Serverless runs code only on demand — that is, when requested by the application, enabling the
cloud customer to pay for compute resources only when their code is running. With serverless,
you never pay for idle computing capacity (see the sketch below).
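
To make the serverless model concrete, here is a minimal sketch of a function-as-a-service
handler in Python. The handler signature, event shape and names are illustrative assumptions,
since each provider defines its own conventions; the core idea is that you supply only the
function body, while the platform provisions, scales and bills compute per invocation.

    # Hypothetical function-as-a-service handler (illustrative only;
    # real providers each define their own event and context shapes).
    import json

    def handler(event, context=None):
        # The platform invokes this function only when a request arrives;
        # no server runs (or bills) between invocations.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    # Local smoke test; in production the cloud provider calls handler().
    if __name__ == "__main__":
        print(handler({"name": "serverless"}))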

To learn more about serverless — including which workloads are prime targets for serverless,
and which are not — read this post.

Learn more about serverless computing

SaaS – Software as a Service

SaaS is application software that runs in the cloud, and which customers use via internet
connection, usually in a web browser, typically for a monthly or annual fee. SaaS is still the most
widely used form of cloud computing. If you’ve used salesforce.com, Hubspot or Carbonite,
you’ve used SaaS.

SaaS lets you and your team start using software rapidly. Just sign up and get to work. It lets you
access your specific instance of the application and your data from any computer, and typically
from any mobile device. If your computer or mobile device breaks, you don’t lose your data
(because it’s all in the cloud). The software scales as needed. And the SaaS vendor applies fixes
and updates without any effort on your part.

Learn more about SaaS

Types of cloud computing


Generally speaking, there are four cloud computing models.

 Public cloud

 Private cloud

 Hybrid cloud

 Multicloud

Public cloud
In a public cloud, a cloud provider offers affordable access to cloud services, running on some
portion of its privately-owned infrastructure, via the internet. Customers don’t need to purchase
any hardware, software or supporting infrastructure of their own; everything is owned and
managed by the cloud provider, and the customer effectively “rents” a portion of it for a
subscription or usage-based fee. Multiple customers — thousands and thousands — share the
resources of a public cloud.

Public cloud offers instant access to SaaS business applications, IaaS computing and storage
resources, and PaaS for application deployment and development. According to
Gartner, customers will spend $266.4 billion on public cloud services this year (link resides
outside IBM), up 17% from last year.

Private cloud
Private cloud is cloud infrastructure operated exclusively for one company; it’s managed by the
company or a third party (or both), and is hosted primarily on premises, but can also be hosted on
dedicated cloud-provider or third-party infrastructure. Private cloud enables a company to take
advantage of cloud efficiencies while providing greater control over resources, data security and
regulatory compliance, and avoiding the potential impact of sharing resources with another cloud
customer (called “shared tenancy”).

Learn more about private cloud

Hybrid cloud
Hybrid cloud integrates private and public clouds, using technologies and management tools that
allow workloads to move seamlessly between both as needed for optimal performance, security,
compliance and cost-effectiveness. For example, hybrid cloud enables a company to keep
sensitive data and mission-critical legacy applications (which can’t easily be migrated to the
cloud) on premises, while leveraging public cloud for SaaS applications, PaaS for rapid
deployment of new applications, and IaaS for additional storage or compute capacity on demand.
Learn more about hybrid cloud

Multicloud and hybrid multicloud


Multicloud refers to infrastructure comprising multiple vendors’ public clouds — the use of
services from two or more major cloud providers (e.g., IBM Cloud and Google), or services from
a major cloud provider and at least one SaaS software vendor.

Hybrid multicloud refers to the use of private cloud plus multicloud.

Increasingly, businesses are embracing hybrid multicloud as the deployment model that lets them
take maximum advantage of cloud agility and flexibility; meet their security and regulatory
compliance needs; and integrate their legacy applications and systems. According to industry
analyst 451 Research (Video, 02:03), at least 67% of businesses use two or more separate clouds,
and 55% of organizations have settled on a hybrid multicloud approach. IBM estimates hybrid
cloud to be a $1.2-trillion business opportunity for cloud and cloud service providers.

Learn more about multicloud

Cloud computing and IBM Cloud


IBM Cloud™ offers a robust suite of advanced data and AI tools, along with deep industry expertise,
to help you on your journey to the cloud. Customers can choose among over 170 products and services
covering data, containers, AI and machine learning, IoT, blockchain and more. And they can
combine public cloud, public dedicated cloud, private cloud, hybrid cloud and multicloud
deployment models to match the right workload to the right cloud environment.

IBM Cloud maintains 60 data centers worldwide, enabling local deployment, global scalability,
and built-in resiliency and redundancy on six continents. 47 of the Fortune 50 companies trust
mission-critical applications to IBM Cloud’s enterprise-grade infrastructure. And IBM is the one
cloud provider that’s also a leading IT security organization, with over 8,000 security experts on
staff.

Learn how American Airlines became more responsive to customer needs with a new technology
platform from the IBM Cloud.

To get started with IBM Cloud, sign up for an IBMid and create your IBM Cloud account.

Security issues in cloud computing


Your cloud infrastructure is only as secure as you make it. Responsibility for securing the cloud
lies not only with security teams, but also with DevOps and operations teams that are charged
with ensuring appropriate security controls are used. Businesses are eager to bring more
regulated workloads to the cloud, including any application that manages or contains personal
identifying information, financial information or healthcare information.

To avoid cloud computing risks, a cloud managed services provider should incorporate built-in
security layers at every level — from the data center to the operating system — delivering a fully
configured solution with industry-leading physical security and regular vulnerability scans
performed by highly skilled specialists.

Learn more about cloud security

Cloud computing for the enterprise


Enterprises eager to undergo digital transformations and modernize their applications are quick
to see the value of adopting a cloud computing platform. They are increasingly finding business
agility or cost savings by renting software. Each cloud computing service and deployment model
type provides you with different levels of control, flexibility and management. Therefore, it’s
important to understand the differences between them.

Common convention points to public cloud as the delivery model of choice. But, when
considering the right architecture of cloud computing for your applications and workloads, you
must begin by addressing the unique needs of your business.

This can include many factors, such as government regulations, security, performance, data
residency, service levels, time to market, architecture complexity, skills and preventing vendor
lock-in. Add in the need to incorporate the emerging technologies, and you can see why IT
leaders are challenging the notion that cloud computing migration is easy.

At first glance, the types of cloud computing seem simple: public, private or a hybrid mix of
both. In reality, the choices are many. Public cloud can include shared, dedicated and bare metal
delivery models. Fully and partially managed clouds are also options. And, in some cases,
especially for existing applications where architectures are too complex to move or the cost-
benefit ratio is not optimal, cloud may not be the right choice.

The right model depends on your workload. You should understand the advantages and
disadvantages of each cloud deployment model and take a methodical approach to determining
which workloads to move to which type of cloud for the maximum benefit.

Dive deeper into specific cloud service and deployment models, cloud computing architecture
and cloud computing examples

Cloud computing storage


Storage growth continues at a significant rate, driven by new workloads like analytics, video and
mobile applications. While storage demand is increasing, most IT organizations are under
continued pressure to lower the cost of their IT infrastructure through the use of shared cloud
computing resources. It’s vital for software designers and solution architects to match the
specific requirements of their workloads to the appropriate storage solution or, in many
enterprise cases, a mix.

One of the biggest advantages of cloud storage is flexibility. A company can manage, analyze, add
to and transfer all of its data from a single dashboard — something impossible to do today on
storage hardware that sits alone in a data center.

The other major benefit of storage software is that it can access and analyze any kind of data
wherever it lives, no matter the hardware, platform or format. So, from mobile devices linked to
your bank to servers full of unstructured social media information, data can be understood via the
cloud.

Learn more about cloud storage

Cloud computing pricing


Like any other business-critical decision, selecting a cloud service requires due diligence and
research beyond published per-unit rates. It requires an in-depth understanding of workload
performance characteristics and needs and the ability to match those needs to the actual offerings
of multiple cloud vendors.

Cloud service providers have no shared standard “units” for cloud capacity or common pricing
structures, nor are there common specifications for the underlying hardware that runs the cloud
applications. As a result, an assumption of total workload cost based on a provider’s basic “per
unit” rate can easily be off by orders of magnitude.

10 tips for enterprise cloud procurement


 Do your homework. Don’t assume that the provider that’s currently in the news with price
decreases will be the best-priced provider for your workload.
 Understand all workload requirements that will impact cloud workload costs and operations (not
just compute and storage). Consider costs associated with licensing software for each core, data
transfer to the internet or private network and persistent storage.
 Understand how the provider will support geographically-dispersed workloads. If your
application must move data throughout the globe, ensure that the provider not only has data
centers in the regions where you do business, but also a high-performance, private global
network. Also consider whether the provider charges data transfer fees between and among cloud
centers — any such fees can considerably add to costs if your company expands globally.
 Consider your business requirements, including the “agility” tradeoff. Can you afford to lock in
to a specific provider, unit type, volume or time frame, even if it means a discount?
 Consider the “net present value of money” when you evaluate long-term pricing options. Seek
input from your finance department, especially if you are considering an upfront payment option.
This will ensure your comparisons are valid and adhere to your company’s accounting rules for
net present value.
 Factor in the non-workload-specific costs your business will need to run the workload optimally,
including technical support, engineering and even professional services.
 Allow for changing workload needs. You should be able to move workloads as needed from, for
example, bare metal to virtualized servers without a major effort.
 Consider the big picture. Each cloud workload should fit into a holistic cloud strategy, one that
will likely comprise multiple deployment models, geographies and vendors.
 Even as you consider price performance based on the individual cloud workload, consider the
provider’s ability to support your broader hybrid IT strategy via OpenStack-compatible
platforms, integrated solutions and seamless migration across models.
 Finally, don’t just look at price for the workload. Instead, consider price-performance to be your
base unit of comparison as you consider cloud options or any type of IT option (see the sketch
below).
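
To illustrate that last tip, the toy calculation below compares two hypothetical providers on
price-performance rather than raw price. All figures are invented; the point is that the lower
per-unit rate is not necessarily the better value.

    # Toy price-performance comparison (all figures are hypothetical).
    providers = {
        "Provider A": {"hourly_rate": 0.90, "requests_per_hour": 1_200_000},
        "Provider B": {"hourly_rate": 1.20, "requests_per_hour": 2_000_000},
    }

    for name, p in providers.items():
        # Cost per million requests served: lower means better value.
        cost_per_million = p["hourly_rate"] / (p["requests_per_hour"] / 1_000_000)
        print(f"{name}: ${cost_per_million:.2f} per million requests")

    # Provider B charges the higher hourly rate yet wins on price-performance.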

Use the cloud pricing calculator to get an idea of your workload costs

The future of cloud


Within the next three years, 75 percent of existing non-cloud apps will move to the cloud.
Today’s computing landscape shows companies not only adopting cloud but using more than one
cloud environment. Even then, the cloud journey for many has only just begun, moving beyond
low-end infrastructure as a service to establish higher business value.

Scale security while innovating microservices fast


CISOs are notoriously risk-averse and compliance-focused, providing policies for IT and App
Dev to enforce. In contrast, serving business outcomes, app dev leaders want to eliminate
DevOps friction wherever possible in continuous integration and development of applications
within a cloud native, microservices architecture.  What approach satisfies those conflicting
demands while accomplishing the end goal: scale security?

Establishing a chain of trust to scale security

As the foundation of information security, a hardware-rooted chain of trust verifies the integrity
of every relevant component in the cloud platform, giving you security automation that flexibly
integrates into the DevOps pipeline. A true chain of trust would start in the host chip
firmware and build up through the container engine and orchestration system, securing all critical
data and workloads during an application’s lifecycle.

Hardware is the ideal foundation because it is rooted in silicon, making it difficult for hackers to
alter.

The chain of trust would be built from this root using the measure-and-verify security model,
with each component measuring, verifying and launching the next level. This process would
extend to the container engine, creating a trust boundary, with measurements stored in a Trusted
Platform Module (TPM) on the host.   

So far, so good—but now you must extend this process beyond the host trust boundary to the
container orchestration level. You must continue to scale security.

Attestation software on a different server can verify current measurements against known good
values. The container orchestrator communicates with the attestation server to verify the integrity
of worker hosts, which in turn set up and manage the containers deployed on
them. All communication beyond the host trust boundary is encrypted, resulting in a highly
automated, trusted container system. 
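
As a simplified illustration of the measure-and-verify model, the sketch below hashes a
component and compares the digest against a known-good value before launching the next stage.
The component name and digest are hypothetical, and a real chain of trust uses TPM-backed
measurements and signed attestation quotes rather than a plain dictionary.

    # Minimal sketch of measure-and-verify (illustrative; a real chain of
    # trust uses TPM-backed measurements and signed attestation quotes).
    import hashlib

    # Hypothetical known-good digest, as an attestation server might store it
    # (this value is the SHA-256 of an empty file, used as a stand-in).
    KNOWN_GOOD = {
        "container-engine": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def measure(path):
        """Hash a component on disk, as firmware would measure the next stage."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def verify_and_launch(name, path):
        """Launch the next stage only if its measurement matches the known-good value."""
        if measure(path) != KNOWN_GOOD.get(name):
            raise RuntimeError(f"{name}: measurement mismatch; refusing to launch")
        print(f"{name}: verified, launching next stage")

    if __name__ == "__main__":
        with open("engine.bin", "wb"):  # create an empty stand-in component
            pass
        verify_and_launch("container-engine", "engine.bin")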

How to scale security management for the enterprise

What do you get with a fully implemented chain of trust?  

 Enhanced transparency and scalability: Because a chain of trust facilitates automated
security, DevOps teams are free to work at unimpeded velocity. They only need to
manage the security policies against which the trusted container system evaluates its
measurements.  
 Geographical workload policy verification: Smart container orchestration limits
movement to approved locations only.  
 Container integrity assurance: When containers are moved, the attestor checks to
ensure that no tampering occurred during the process. The system verifies that the moved
container is the same as the originally created container.
 Security for sensitive data: Encrypted containers can only be decrypted on approved
servers, protecting data in transit from exposure and misuse.  
 Simplified compliance controls and reporting: A metadata audit trail provides
visibility and auditable evidence that critical container workloads are running on trusted
servers. 

The chain of trust architecture is designed to meet the urgent need for both security and rapid
innovation. Security officers can formulate security policies that are automatically applied to
every container being created or moved. Beyond maintaining the policies themselves in a
manifest, each step in the sequence is automated, enabling DevOps teams to quickly build and
deploy applications without manually managing security. 

As your team evaluates cloud platforms, ask vendors to explain how they establish and
maintain trust in the technology that will host your organization’s applications. It helps to have
clear expectations going in.  

What is virtualization?
Virtualization is a process that allows for more efficient utilization of physical
computer hardware and is the foundation of cloud computing.

Virtualization uses software to create an abstraction layer over computer hardware
that allows the hardware elements of a single computer—processors, memory, storage
and more—to be divided into multiple virtual computers, commonly called virtual
machines (VMs). Each VM runs its own operating system (OS) and behaves like an
independent computer, even though it is running on just a portion of the actual
underlying computer hardware.

It follows that virtualization enables more efficient utilization of physical computer
hardware and allows a greater return on an organization’s hardware investment.

Today, virtualization is a standard practice in enterprise IT architecture. It is also the
technology that drives cloud computing economics. Virtualization enables cloud
providers to serve users with their existing physical computer hardware; it enables
cloud users to purchase only the computing resources they need when they need it,
and to scale those resources cost-effectively as their workloads grow.

For a further overview of how virtualization works, see our video “Virtualization
Explained.”

Benefits of virtualization
Virtualization brings several benefits to data center operators and service providers:

 Resource efficiency: Before virtualization, each application server required its
own dedicated physical CPU—IT staff would purchase and configure a
separate server for each application they wanted to run. (IT preferred one
application and one operating system (OS) per computer for reliability
reasons.) Invariably, each physical server would be underused. In contrast,
server virtualization lets you run several applications—each on its own VM
with its own OS—on a single physical computer (typically an x86 server)
without sacrificing reliability. This enables maximum utilization of the physical
hardware’s computing capacity.
 Easier management: Replacing physical computers with software-defined
VMs makes it easier to use and manage policies written in software. This
allows you to create automated IT service management workflows. For
example, automated deployment and configuration tools enable
administrators to define collections of virtual machines and applications as
services, in software templates. This means that they can install those services
repeatedly and consistently without cumbersome, time-consuming, and error-
prone manual setup. Admins can use virtualization security policies to
mandate certain security configurations based on the role of the virtual
machine. Policies can even increase resource efficiency by retiring unused
virtual machines to save on space and computing power.
 Minimal downtime: OS and application crashes can cause downtime and
disrupt user productivity. Admins can run multiple redundant virtual machines
alongside each other and failover between them when problems arise.
Running multiple redundant physical servers is more expensive.
 Faster provisioning: Buying, installing, and configuring hardware for each
application is time-consuming. Provided that the hardware is already in place,
provisioning virtual machines to run all your applications is significantly faster.
You can even automate it using management software and build it into
existing workflows.

For a more in-depth look at the potential benefits, see "5 Benefits of Virtualization."

Solutions
Several companies offer virtualization solutions covering specific data center tasks or
end-user-focused desktop virtualization scenarios. Better-known examples include
VMware, which specializes in server, desktop, network, and storage virtualization;
Citrix, which has a niche in application virtualization but also offers server
virtualization and virtual desktop solutions; and Microsoft, whose Hyper-V
virtualization solution ships with Windows and focuses on virtual versions of server
and desktop computers.

Virtual machines (VMs)


Virtual machines (VMs) are virtual environments that simulate a physical computer in
software form. They normally comprise several files containing the VM’s
configuration, the storage for the virtual hard drive, and some snapshots of the VM
that preserve its state at a particular point in time.

For a complete overview of VMs, see "What is a Virtual Machine?"


Hypervisors
A hypervisor is the software layer that coordinates VMs. It serves as an interface
between the VM and the underlying physical hardware, ensuring that each has access
to the physical resources it needs to execute. It also ensures that the VMs don’t
interfere with each other by impinging on each other’s memory space or compute
cycles.

There are two types of hypervisors:

 Type 1 or “bare-metal” hypervisors interact with the underlying physical
resources, replacing the traditional operating system altogether. They most
commonly appear in virtual server scenarios.
 Type 2 hypervisors run as an application on an existing OS. Most commonly
used on endpoint devices to run alternative operating systems, they carry a
performance overhead because they must use the host OS to access and
coordinate the underlying hardware resources.

“Hypervisors: A Complete Guide” provides a comprehensive overview of everything
about hypervisors.

Types of virtualization
To this point we’ve discussed server virtualization, but many other IT infrastructure
elements can be virtualized to deliver significant advantages to IT managers (in
particular) and the enterprise as a whole. In this section, we'll cover the following
types of virtualization:

 Desktop virtualization
 Network virtualization
 Storage virtualization
 Data virtualization
 Application virtualization
 Data center virtualization
 CPU virtualization
 GPU virtualization
 Linux virtualization
 Cloud virtualization

Desktop virtualization

Desktop virtualization lets you run multiple desktop operating systems, each in its
own VM on the same computer.

There are two types of desktop virtualization:

 Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a
central server and streams them to users who log in on thin client devices. In
this way, VDI lets an organization provide its users access to a variety of OSs
from any device, without installing OSs on any device. See "What is Virtual
Desktop Infrastructure (VDI)?" for a more in-depth explanation.
 Local desktop virtualization runs a hypervisor on a local computer, enabling
the user to run one or more additional OSs on that computer and switch from
one OS to another as needed without changing anything about the primary
OS.

For more information on virtual desktops, see “Desktop-as-a-Service (DaaS).”

Network virtualization

Network virtualization uses software to create a “view” of the network that an
administrator can use to manage the network from a single console. It abstracts
hardware elements and functions (e.g., connections, switches, routers) into
software running on a hypervisor. The network administrator can
modify and control these elements without touching the underlying physical
components, which dramatically simplifies network management.

Types of network virtualization include software-defined networking (SDN), which
virtualizes hardware that controls network traffic routing (called the “control plane”),
and network function virtualization (NFV), which virtualizes one or more hardware
appliances that provide a specific network function (e.g., a firewall, load balancer, or
traffic analyzer), making those appliances easier to configure, provision, and manage.

Storage virtualization

Storage virtualization enables all the storage devices on the network—whether
they’re installed on individual servers or standalone storage units—to be accessed and
managed as a single storage device. Specifically, storage virtualization combines all
blocks of storage into a single shared pool from which they can be assigned to any
VM on the network as needed. Storage virtualization makes it easier to provision
storage for VMs and makes maximum use of all available storage on the network.

For a closer look at storage virtualization, check out "What is Cloud Storage?"

Data virtualization

Modern enterprises store data from multiple applications, using multiple file formats,
in multiple locations, ranging from the cloud to on-premises hardware and software
systems. Data virtualization lets any application access all of that data—irrespective
of source, format, or location.

Data virtualization tools create a software layer between the applications accessing the
data and the systems storing it. The layer translates an application’s data request or
query as needed and returns results that can span multiple systems. Data virtualization
can help break down data silos when other types of integration aren’t feasible,
desirable, or affordable.
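
As a toy illustration of that translation layer, the sketch below answers one query from two
very different backends: an in-memory SQLite table and a JSON document store. All schemas and
records are invented for the example; real data virtualization products add query planning,
caching and governance on top of the same idea.

    # Toy data virtualization layer: one query interface over two backends
    # (schemas and data are invented for illustration).
    import json
    import sqlite3

    # Backend 1: a relational store.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")

    # Backend 2: a JSON document store.
    documents = json.loads('[{"id": 3, "name": "Edsger"}]')

    def all_customers():
        """Return customers from every backend in one uniform shape."""
        rows = [{"id": i, "name": n} for i, n in db.execute("SELECT id, name FROM customers")]
        return rows + documents

    print(all_customers())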

Application virtualization

Application virtualization runs application software without installing it directly on
the user’s OS. This differs from complete desktop virtualization (mentioned above)
because only the application runs in a virtual environment—the OS on the end user’s
device runs as usual. There are three types of application virtualization: 

 Local application virtualization: The entire application runs on the endpoint
device but runs in a runtime environment instead of on the native hardware.
 Application streaming: The application lives on a server which sends small
components of the software to run on the end user's device when needed.
 Server-based application virtualization: The application runs entirely on a
server that sends only its user interface to the client device.

Data center virtualization

Data center virtualization abstracts most of a data center’s hardware into software,
effectively enabling an administrator to divide a single physical data center into
multiple virtual data centers for different clients.

Each client can access its own infrastructure as a service (IaaS), which would run on
the same underlying physical hardware. Virtual data centers offer an easy on-ramp
into cloud-based computing, letting a company quickly set up a complete data center
environment without purchasing infrastructure hardware.

CPU virtualization

CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU
to be divided into multiple virtual CPUs for use by multiple VMs.

At first, CPU virtualization was entirely software-defined, but many of today’s
processors include extended instruction sets that support CPU virtualization, which
improves VM performance.

GPU virtualization

A GPU (graphics processing unit) is a special multi-core processor that improves
overall computing performance by taking over heavy-duty graphic or mathematical
processing. GPU virtualization lets multiple VMs use all or some of a single GPU’s
processing power for faster video, artificial intelligence (AI), and other graphic- or
math-intensive applications.

 Pass-through GPUs make the entire GPU available to a single guest OS.


 Shared vGPUs divide physical GPU cores among several virtual GPUs (vGPUs)
for use by server-based VMs.

Linux virtualization

Linux includes its own hypervisor, called the kernel-based virtual machine (KVM),
which supports Intel and AMD’s virtualization processor extensions so you can create
x86-based VMs from within a Linux host OS.

As an open source OS, Linux is highly customizable. You can create VMs running
versions of Linux tailored for specific workloads or security-hardened versions for
more sensitive applications.
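
For readers who want to experiment, the libvirt project provides Python bindings for working
with KVM. The sketch below assumes the libvirt-python package and a local qemu:///system
hypervisor; it simply connects to the host and lists its virtual machines.

    # Sketch: enumerate VMs on a local KVM host via libvirt
    # (assumes the libvirt-python package and a qemu:///system hypervisor).
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()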

Cloud virtualization

As noted above, the cloud computing model depends on virtualization. By virtualizing
servers, storage, and other physical data center resources, cloud computing providers
can offer a range of services to customers, including the following: 

 Infrastructure as a service (IaaS): Virtualized server, storage, and network
resources you can configure based on your requirements.
 Platform as a service (PaaS): Virtualized development tools, databases, and
other cloud-based services you can use to build your own cloud-based
applications and solutions.
 Software as a service (SaaS): Software applications you use on the cloud. SaaS
is the cloud-based service most abstracted from the hardware.

If you’d like to learn more about these cloud service models, see our guide: “IaaS vs.
PaaS vs. SaaS.”

Virtualization vs. containerization


Server virtualization reproduces an entire computer in software, which then runs an
entire OS. The OS runs one application. That’s more efficient than no virtualization at
all, but it still duplicates unnecessary code and services for each application you want
to run.

Containers take an alternative approach. They share an underlying OS kernel, only
running the application and the things it depends on, like software libraries and
environment variables. This makes containers smaller and faster to deploy.

For a deep dive into containers and containerization, check out “Containers: A
Complete Guide” and “Containerization: A Complete Guide.”

Check out the blog post "Containers vs. VMs: What's the difference?" for a closer
comparison.

In the following video, Sai Vennam breaks down the basics of containerization and
how it compares to virtualization via VMs.

VMware
VMware creates virtualization software. VMware began by offering server
virtualization only—its ESX (now ESXi) hypervisor was one of the earliest
commercially successful virtualization products. Today VMware also offers solutions
for network, storage, and desktop virtualization.

For a deep dive on everything involving VMware, see “VMware: A Complete Guide.”
 

Security
Virtualization offers some security benefits. For example, VMs infected with malware
can be rolled back to a point in time (called a snapshot) when the VM was uninfected
and stable; they can also be more easily deleted and recreated. You can’t always
disinfect a non-virtualized OS, because malware is often deeply integrated into the
core components of the OS, persisting beyond system rollbacks.

Virtualization also presents some security challenges. If an attacker compromises a
hypervisor, they potentially own all the VMs and guest operating systems. Because
hypervisors can also allow VMs to communicate between themselves without
touching the physical network, it can be difficult to see their traffic, and therefore to
detect suspicious activity.

A Type 2 hypervisor on a host OS is also susceptible to host OS compromise.

The market offers a range of virtualization security products that can scan and patch
VMs for malware, encrypt entire VM virtual disks, and control and audit VM access.

Virtualization and IBM


IBM Cloud offers a full complement of cloud-based virtualization solutions, spanning
public cloud services through to private and hybrid cloud offerings. You can use it to
create and run virtual infrastructure and also take advantage of services ranging from
cloud-based AI to VMware workload migration with IBM Cloud for VMware
Solutions.

Sign up today for an IBM Cloud account.

What is an open hybrid cloud strategy?


An open hybrid approach allows you to integrate the best features and functions from any cloud,
or traditional IT environment, and tap the unmatched pace and quality of innovations from the
open source community.
PaaS (Platform-as-a-Service)
PaaS, or Platform-as-a-Service, provides a complete, flexible and cost-effective cloud platform
for developing, running and managing applications.


What is PaaS (Platform-as-a-Service)?


PaaS, or Platform-as-a-Service, is a cloud computing model that provides customers a
complete cloud platform—hardware, software, and infrastructure—for developing, running, and
managing applications without the cost, complexity, and inflexibility that often comes with
building and maintaining that platform on-premises.

The PaaS provider hosts everything—servers, networks, storage, operating system software,
databases, development tools—at their data center. Typically customers can pay a fixed fee to
provide a specified amount of resources for a specified number of users, or they can choose 'pay-
as-you-go' pricing to pay only for the resources they use. Either option enables PaaS customers
to build, test, deploy, run, update and scale applications more quickly and inexpensively than
they could if they had to build out and manage their own on-premises platform.

Every leading cloud service provider—including Amazon Web Services (AWS), Google
Cloud, IBM Cloud and Microsoft Azure—has its own PaaS offering. Popular PaaS solutions are
also available as open source projects (e.g. Apache Stratos, Cloud Foundry) or from software
vendors (e.g. Red Hat OpenShift and Salesforce Heroku).


Benefits of PaaS
The most commonly-cited benefits of PaaS, compared to an on-premises platform, include:

 Faster time to market. With PaaS, there’s no need to purchase and install the hardware
and software you use to build and maintain your application development platform—and
no need for development teams to wait while you do this. You simply tap into the cloud
service provider’s PaaS to begin provisioning resources and developing immediately.
 Affordable access to a wider variety of resources. PaaS platforms typically offer
access to a wider range of choices up and down the application stack—
including operating systems, middleware, databases and development tools—than most
organizations can practically or affordably maintain themselves. 
 More freedom to experiment, with less risk. PaaS also lets you try or test
new operating systems, languages and other tools without having to make substantial
investments in them, or in the infrastructure required to run them.
 Easy, cost-effective scalability. With an on-premises platform, scaling is always
expensive, often wasteful and sometimes inadequate: You have to purchase
additional compute, storage and networking capacity in anticipation of traffic spikes;
much of that capacity sits idle during low-traffic periods, and none of it can be increased
in time to accommodate unanticipated surges. With PaaS, you can purchase additional
capacity, and start using it immediately, whenever you need it.
 Greater flexibility for development teams. PaaS services provide a
shared software development environment that allows development and operations teams
access to all the tools they need, from any location with an internet connection.
 Lower costs overall. Clearly PaaS reduces costs by enabling an organization to avoid
capital equipment expense associated with building and scaling an application platform.
But PaaS can also reduce or eliminate software licensing costs. And by handling
patches, updates and other administrative tasks, PaaS can reduce your overall application
management costs. 

How PaaS works


In general, PaaS solutions have three main parts:

 Cloud infrastructure including virtual machines (VMs), operating system software,
storage, networking, firewalls
 Software for building, deploying and managing applications
 A graphical user interface, or GUI, where development or DevOps teams can do all their
work throughout the entire application lifecycle

Because PaaS delivers all standard development tools through an online GUI,
developers can log in from anywhere to collaborate on projects, test new applications, or roll out
completed products. Applications are designed and developed right in
the PaaS using middleware. With streamlined workflows, multiple development and operations
teams can work on the same project simultaneously.

PaaS providers manage the bulk of your cloud computing services, such as
servers, runtime and virtualization. As a PaaS customer, your company maintains management
of applications and data.

PaaS, IaaS, and SaaS


Like PaaS, Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) are very
common cloud computing service models. In fact, it's very common for an organization to use all
three—even if they don't purchase all three specifically. To clarify:

IaaS is internet access to 'raw' IT infrastructure—physical servers, virtual machines, storage,
networking, firewalls—hosted by a cloud provider. IaaS eliminates the cost and work of owning,
managing and maintaining on-premises infrastructure. With IaaS the organization provides its
own application platform and applications. 

Any PaaS offering necessarily includes the IaaS resources required to host it, even if those
resources aren't discretely broken out or referred to as IaaS.

SaaS is application software you use via the cloud, as if it were installed on your computer (in
some cases, parts of it are installed on your computer). SaaS enables your organization to use an
application without the expense of setting up the infrastructure to run it, and the effort and
personnel to maintain it (apply bug fixes and updates, address outages, etc.) Salesforce and Slack
are examples of popular SaaS offerings; most web applications are considered SaaS.

Every SaaS offering includes the IaaS resources required to host it and, at minimum, the PaaS
components required to run it. (Some SaaS vendors also provide a discrete PaaS that allows
third parties to customize the SaaS offering.)

Another way to compare IaaS, PaaS and SaaS is based on the amount of management that's left
to the customer vs. the amount of management left to the cloud service provider.

Read more about IaaS, PaaS and SaaS.

Use cases for PaaS


By providing an integrated and ready-to-use platform—and by enabling organizations to offload
infrastructure management to the cloud provider and focus on building, deploying and managing
applications—PaaS can ease or advance a number of IT initiatives, including:

 API development and management: Because of its built-in frameworks, PaaS makes it
much simpler for teams to develop, run, manage and secure APIs (application
programming interfaces) for sharing data and functionality between applications.
 Internet of Things (IoT): Out of the box, PaaS can support a range of programming
languages (Java, Python, Swift, etc.), tools and application environments used
for IoT application development and real-time processing of data generated
by IoT devices.
 Agile development and DevOps: PaaS can provide fully-configured environments for
automating the software application lifecycle including integration, delivery, security,
testing and deployment.
 Cloud migration and cloud-native development: With its ready-to-use tools and
integration capabilities, PaaS can simplify migration of existing applications to the cloud
—particularly via replatforming (moving an application to the cloud with modifications
that take better advantage of cloud scalability, load balancing and other capabilities)
or refactoring (re-architecting some or all of an application
using microservices, containers and other cloud-native technologies).
 Hybrid cloud strategy: Hybrid cloud integrates public cloud services, private
cloud services and on-premises infrastructure and provides orchestration, management
and application portability across all three. The result is a unified and flexible distributed
computing environment, where an organization can run and scale its traditional (legacy)
or cloud-native workloads on the most appropriate computing model. The right PaaS
solution allows developers to build once, then deploy and manage anywhere in
a hybrid cloud environment.

Purpose-built PaaS types


Many cloud, software and hardware vendors offer PaaS solutions for building specific types of
applications, or applications that interact with specific types of hardware, software or devices.

 AIPaaS (PaaS for Artificial Intelligence) lets development teams build artificial
intelligence (AI) applications without the often prohibitive expense of purchasing,
managing and maintaining the significant computing power, storage capabilities and
networking capacity these applications require. AIPaaS typically includes pre-
trained machine learning and deep learning models developers can use as-is or customize,
and ready-made APIs for integrating specific AI capabilities, such
as speech recognition or speech-to-text conversion, into existing or new applications.
 iPaaS (integration platform as a service) is a cloud-hosted solution for integrating
applications. iPaaS provides organizations a standardized way to connect data, processes,
and services across public cloud, private cloud and on-premises environments without
having to purchase, install and manage their own backend integration
hardware, middleware and software. (Note that PaaS solutions often include some degree
of integration capability—API management, for example—but iPaaS is more
comprehensive.)
 cPaaS (communications platform as a service) is a PaaS that lets developers easily add
voice (inbound and outbound calls), video (including teleconferencing) and messaging
(text and social media) capabilities to applications, without investing in specialized
communications hardware and software. 
 mPaaS (mobile platform as a service) is a PaaS that simplifies application
development for mobile devices. mPaaS typically provides low-code (even simple drag-
and-drop) methods for accessing device-specific features including the phone's camera,
microphone, motion sensor and geolocation (or GPS) capabilities.

PaaS and IBM Cloud


IBM provides rich and scalable PaaS solutions for developing cloud native applications from
scratch, or modernizing existing applications to benefit from the flexibility and scalability of the
cloud.

Red Hat OpenShift on IBM Cloud is a fully managed OpenShift service that uses the
enterprise scale and security of IBM Cloud to automate updates, scaling and provisioning, and to
handle unexpected surges in traffic. Your teams can jump-start development and app
modernization with a range of tools and features, and deploy highly available fully-managed
clusters with a single click. Red Hat OpenShift on IBM Cloud was named the leader in The
Forrester Wave: Multicloud Container Development Platforms, Q3 2020 (PDF, 415 KB).

IBM Cloud Pak for Applications helps you modernize existing applications, embed additional
security, and develop new apps that unleash digital initiatives. It offers cloud-native development
solutions that can quickly deliver value, along with flexible licensing that can be tailored to your
specific needs.

What is cloud native?


Explore cloud native applications and how they drive innovation and speed within your
enterprise.

Cloud native refers less to where an application resides and more to how it is built and deployed.

 A cloud native application consists of discrete, reusable components known as microservices
that are designed to integrate into any cloud environment.
 These microservices act as building blocks and are often packaged in containers.
 Microservices work together as a whole to comprise an application, yet each can be
independently scaled, continuously improved, and quickly iterated through automation and
orchestration processes.
 The flexibility of each microservice adds to the agility and continuous improvement of cloud-
native applications.

In the video, "What is Cloud Native?", Andrea Crawford gives an overview of some of the key
concepts.


Microservices and containers


Microservices (also called microservices architecture) is an architectural approach in which a
single application is composed of many smaller, loosely coupled and independently deployable
components or services. These services (also called microservices) typically have their own
technology stack, inclusive of database and data model, and communicate with each other via a
combination of REST APIs, event streaming, and message brokers.
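
To show how small such a service can be, here is a minimal sketch of a single-capability
microservice exposing one REST endpoint. It assumes the Flask package; the route and data are
invented for illustration, and a real service would own its own database.

    # Minimal single-capability microservice (illustrative; assumes Flask).
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical catalog data standing in for the service's own datastore.
    PRODUCTS = {"1": {"name": "widget", "price": 9.99}}

    @app.route("/products/<product_id>")
    def get_product(product_id):
        product = PRODUCTS.get(product_id)
        if product is None:
            return jsonify(error="not found"), 404
        return jsonify(product)

    if __name__ == "__main__":
        app.run(port=8080)  # each microservice runs and scales on its own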

Because microservices can be deployed and redeployed independently, without impacting each
other or disrupting the end-user experience, they are a perfect match for automated, iterative
delivery methodologies such as continuous integration/continuous deployment (CI/CD)
or DevOps. 

In addition to being used to create net-new cloud native applications, microservices can be used
to modernize traditional monolithic applications.

In a recent IBM survey of over 1,200 developers and IT executives, 87% of
microservices users agreed that microservices adoption is worth the expense and effort.

Developers often deploy microservices inside containers - lightweight, executable application
components that combine application source code - in this case, the microservices code - with all
the operating system (OS) libraries and dependencies required to run the code in any
environment. Smaller, more resource-efficient and more portable than virtual machines (VMs),
containers are the de facto compute units of modern cloud native applications.

Containers amplify the benefits of microservices by enabling a consistent deployment and
management experience across a hybrid multicloud environment - public clouds, private
cloud and on-premises infrastructure. But as cloud native applications multiply, so do containers
and the complexity of managing them. Most organizations using containerized microservices
also use a container orchestration platform, such as Kubernetes, to automate container
deployment and management at scale.
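
To give a flavor of programmatic orchestration, the sketch below uses the official Kubernetes
Python client to list the pods running in one namespace. It assumes the kubernetes package and
a valid kubeconfig; the cluster contents will differ from system to system.

    # Sketch: query a Kubernetes cluster with the official Python client
    # (assumes the kubernetes package and a valid kubeconfig).
    from kubernetes import client, config

    config.load_kube_config()  # read credentials from ~/.kube/config
    v1 = client.CoreV1Api()

    # List the pods the orchestrator is currently managing in one namespace.
    for pod in v1.list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.phase)
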
For more information on containers and containerization, see "Containers: A Complete Guide"
and "Containerization: A Complete Guide."

Learn more about why you should use microservices and containers as an architectural construct.


Advantages and disadvantages


IBM customers find themselves increasingly tasked with improving existing applications,
building new applications, and enhancing user experience. Cloud native applications meet these
demands by improving app performance, flexibility, and extensibility.

Advantages

 Compared to traditional monolithic apps, cloud native applications can be easier to manage as
iterative improvements occur using Agile and DevOps processes.
 Composed of individual microservices, cloud native applications can be improved incrementally
and automatically to continuously add new and improved application features.
 Improvements can be made non-intrusively, causing no downtime or disruption of the end-user
experience.
 Scaling up or down proves easier with the elastic infrastructure that underpins cloud native
apps.
 The cloud native development process more closely matches the speed and innovation
demanded by today’s business environment.

Disadvantages

 Although microservices enable an iterative approach to application improvement, they also
create the necessity of managing more elements. Rather than one large application, it becomes
necessary to manage far more small, discrete services.
 Cloud native apps demand additional toolsets to manage the DevOps pipeline, replace
traditional monitoring structures, and control microservices architecture.
 Cloud native applications allow for rapid development and deployment, but they also demand a
business culture that can cope with the pace of that innovation.


Application examples
Cloud native applications often have quite specific functions. Consider how cloud native
applications might be used on a travel website. Each topic covered by the site—flights, hotels,
cars, specials—is its own microservice. Each microservice may roll out new features
independent of the other microservices. Specials and discounts can also scale out independently.
While the travel site is presented to customers as a whole, each microservice remains
independent and can be scaled or updated as needed without affecting other services. The
following are a few examples of other cloud native applications.

IBM Cloud Garage provides IBM customers consulting expertise to build scalable, innovative
cloud native apps fast. It offers an innovation hub where businesses of all sizes can design and
build apps that solve real-world business needs.

American Airlines (2:50) partnered with IBM to build a Dynamic Rebooking app that launched
during a severe weather pattern. The app improved the customer experience by providing users
more information and an improved rebooking process.

XComP Analytics (1:56), an analytics platform for education and training, needed to solve an
analytics problem, but in the process of correcting one issue, the company was able to develop
six new products after engaging with IBM Cloud Garage. The solution included the use of
microservices architecture and plugging in IBM Watson to solve specific analytics issues.

UBank (2:45) had a business need to improve their home-loan offering and help customers
complete the home-loan process. The company's smart assistant app, RoboChat, answered that
need and was built using the IBM DevOps toolchain. Customers that used RoboChat had a 15
percent higher home-loan completion rate.

A critical point of medical research is to advise doctors on best practices for patient care.
However, medical research that reveals best practices takes 17 years to work its way into actual
medical practice. ThinkResearch (2:06) uses IBM Cloud to deliver the best medical information
at the point of care. By using IBM Cloud infrastructure and managed Kubernetes services, the
ThinkResearch DevOps team can focus on innovation and patient care rather than infrastructure.

Development principles
Whether creating a new cloud native application or modernizing an existing application,
developers adhere to a consistent set of principles:

 Follow the microservices architectural approach: Break applications down to the single-
function services known as microservices. Microservices are loosely coupled but remain
independent, allowing the incremental, automated, and continuous improvement of an
application without causing downtime.
 Rely on containers for maximum flexibility and scalability: Containers package software with all
its code and dependencies in one place, allowing the software to run anywhere. This allows
maximum flexibility and portability in a multicloud environment. Containers also allow fast
scaling up or down with Kubernetes orchestration policies defined by the user.
 Adopt Agile methods: Agile methods speed the creation and improvement process. Developers
can quickly iterate updates based on user feedback, allowing the working application version to
match as closely as possible to end-user expectations.

Storage
Cloud native applications frequently rely on containers. The appeal of containers is that they are
flexible, lightweight, and portable. Early use of containers tended to focus on stateless
applications that had no need to save user data from one user session to the next.

However, as more core business functions move to the cloud, the issue of persistent storage must
be addressed in a cloud native environment. This requires developers to consider new ways to
approach cloud storage.

Just as cloud native application development takes on a microservices and modular approach, so
must cloud native storage. Cloud native data can reside in any number of places— such as event
or system logs, relational databases, and document or object stores.

Data location, retention demands, portability, platform compatibility, and security are only a few
of the aspects that developers must consider when planning for cloud native storage.

Discover how IBM Cloud Object Storage creates a persistent data store for cloud native
applications.

Cloud native vs. traditional applications


Cloud native vs. Cloud enabled

A cloud enabled application is an application that was developed for deployment in a traditional
data center but was later changed so that it also could run in a cloud environment. Cloud native
applications, however, are built to operate only in the cloud. Developers design cloud native
applications to be scalable, platform agnostic, and composed of microservices.
Cloud native vs. Cloud ready

In the short history of cloud computing, the meaning of "cloud ready" has shifted several times.
Initially, the term applied to services or software designed to work over the internet. Today, the
term more often describes an application that works in a cloud environment or a traditional app
that has been reconfigured for one. The term "cloud native" has a much shorter history and refers
either to an application developed from the outset to work only in the cloud, taking advantage of
the characteristics of cloud architecture, or to an existing app that has been refactored and
reconfigured according to cloud native principles.

Cloud native vs. Cloud based

A cloud based service or application is delivered over the internet. It’s a general term applied
liberally to any number of cloud offerings. Cloud native is a more specific term. Cloud native
describes applications designed to work in cloud environments; the term denotes applications
that rely on microservices and continuous integration and continuous delivery (CI/CD), and that
can be used via any cloud platform.

Cloud native vs. Cloud first

Cloud first describes a business strategy in which organizations commit to using cloud resources
first when launching new IT services, refreshing existing services, or replacing legacy
technology. Cost savings and operational efficiencies drive this strategy. Cloud native
applications pair well with a cloud-first strategy because they use only cloud resources and are
designed to take advantage of the beneficial characteristics of cloud architecture.

Cloud native and IBM


Meeting more-demanding user expectations means adopting the right architectures, practices,
and technologies. As you look to enhance user experience by building new applications
and modernize existing applications on your journey to cloud, cloud native can help
by improving app performance, flexibility, and extensibility.

Take the next step:

 See how you can start to move forward by using IBM for cloud native.
 Learn how IBM can help lead the way with cloud native professional services.
 Try your hand at modernizing an existing app for cloud native deployment with this tutorial.
 Explore the Cloud Native and Multicloud course and badge contained within the IBM Cloud
Associate Solution Advisor role-based certification.
 Build skills through modern integration, security, and identity courses, such as “Deploying Cloud-
Native Architectures and Applications” and “Preparing for Cloud-Native Security” contained
within the IBM Cloud Professional Developer role-based training and certification.
Get started with an IBM Cloud account today.

Docker Swarm vs. Kubernetes: Which of these container orchestration tools is right for you?
When considering the debate of Docker Swarm vs. Kubernetes, it might seem like a foregone
conclusion to many that Kubernetes is the right choice for workload orchestration. Let’s take a
moment, however, to explore the similarities and differences between Docker
Swarm and Kubernetes — the two preeminent container orchestrators — and see how they fit
into the cloud deployment and management world.

What are containers?


In a nutshell, containers are a standard way to package apps and all their dependencies so that
you can seamlessly move the apps between runtime environments. By packaging an app’s code,
dependencies, and configurations into one easy-to-use building block, containers let you take
important steps toward shortening deployment time and improving application reliability. Of
course, to be able to use your containers most effectively, you'll need to
orchestrate your containerized applications, which is where Kubernetes and Docker Swarm come
in.

What is Kubernetes?
Kubernetes was developed by the open source community to address container scalability and
management needs. In the early days of Kubernetes, community contributors leveraged their
knowledge of creating and running Google's internal cluster tools, Borg and Omega. With the
advent of the Cloud Native Computing Foundation (CNCF) in partnership with the Linux
Foundation, the community adopted Open Governance for Kubernetes. IBM, as a founding
member of CNCF, actively contributes to CNCF’s cloud-native projects, along with other
companies like Google, Red Hat, Microsoft, and Amazon.

Kubernetes is an open source container-management tool for containers and their complex
production workloads. With Kubernetes, developers and DevOps teams can schedule,
deploy, manage, and discover highly available apps by using the flexibility of clusters.
A Kubernetes cluster is made up of compute hosts that are called worker nodes. These worker
nodes are managed by a Kubernetes master that controls and monitors all resources in the cluster.
A node can be a virtual machine or a physical, bare-metal machine.

Pros of Kubernetes

 Open source community that is very active in developing the code base
 Fast-growing KubeCon conferences throughout the year, with attendance numbers more than doubling
 Battle-tested by big players like Google and our own IBM workloads, and runs on most operating systems
 Largest adoption in the market
 Available on the public cloud or on-premises, with managed or non-managed offerings from all the big cloud providers (IBM Cloud, AWS, Microsoft Azure, Google Cloud Platform, etc.)
 Broad Kubernetes support from an ecosystem of cloud tool vendors, such as Sysdig, LogDNA, and Portworx (among many others)
 Key functionalities include service discovery, ingress and load balancing, self-healing, storage orchestration, horizontal scalability, automated rollouts and rollbacks, and batch execution
 Unified set of APIs and strong guarantees about the cluster state
Cons of Kubernetes

 Management of the Kubernetes master takes specialized knowledge
 Updates from the open source community are frequent and require careful patching to avoid disrupting workloads
 Too heavyweight for individual developers to set up for simple apps and infrequent deployments
 Often needs additional tools (e.g., the kubectl CLI), services, continuous integration/continuous deployment (CI/CD) workflows, and other DevOps practices to fully manage access, identity, governance, and security

What is Docker Swarm?
Docker Swarm is another open source container orchestration platform that has been around for
a while. Swarm (or, more accurately, swarm mode) is Docker’s native support for orchestrating
clusters of Docker engines. A Swarm cluster consists of Docker Engine-deployed Swarm
manager nodes (which orchestrate and manage the cluster) and worker nodes (which are directed
by the manager nodes to execute tasks).

Pros of Docker Swarm

 Built for use with the Docker Engine (Docker is a container platform used for building and deploying containerized applications)
 Has its own Swarm API
 Smoothly integrates with Docker tools like Docker Compose and the Docker CLI (uses the same command line interface (CLI) as Docker Engine)
 Tools, services, and software that run with Docker containers will also work well with Swarm
 Is easy to install and set up for Docker environments
 Uses a filtering and scheduling system to provide intelligent node selection, allowing you to pick the optimal nodes in a cluster for container deployment (see the sketch after this list)
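
As a brief sketch of that model, a service can be described in a Compose file and deployed to a swarm with docker stack deploy; the image, port, and placement constraint below are illustrative assumptions, not details from this article:

# Deployed with: docker stack deploy -c stack.yml myapp
version: "3.8"
services:
  web:
    image: nginx:alpine           # placeholder image
    ports:
      - "8080:80"
    deploy:
      replicas: 3                 # Swarm schedules three tasks across the cluster
      placement:
        constraints:
          - node.role == worker   # only schedule tasks on worker nodes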
Cons of Docker Swarm

 Limited customizations and extensions
 Less feature-rich than Kubernetes
 No easy way to separate Dev-Test-Prod workloads in the DevOps pipeline

Not to confuse matters too much, but Docker Enterprise Edition now supports Kubernetes too.
Docker Swarm vs. Kubernetes: A simple head-to-head
comparison
Installation and setup

 Kubernetes: No installation required for managed offerings from cloud providers.
 Swarm: Installed along with Docker.

Scalability

 Kubernetes: Built-in horizontal auto-scaling (see the sketch after this comparison).
 Swarm: Auto-scaling groups.

Load balancing

 Kubernetes: Discovery of services through a single DNS name; access to container applications through an IP address or HTTP route.
 Swarm: Internal load balancers.

High availability

 Kubernetes: Self-healing and intelligent scheduling; high availability of services through replication.
 Swarm: Uses Swarm managers for availability controls.
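
To illustrate the built-in auto-scaling noted in the comparison, here is a minimal sketch of a Kubernetes HorizontalPodAutoscaler; the target Deployment name and the thresholds are illustrative assumptions:

# Scale the hypothetical "web" Deployment between 2 and 10 pods,
# targeting 70% average CPU utilization across them.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # placeholder workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70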

Which container orchestration tool is right for you?


Docker Swarm is deployed with the Docker Engine and is, therefore, readily available in your
environment. As a result, Swarm is easier to start with, and it may be better suited to smaller
workloads.

Kubernetes is now supported by every major cloud provider and do-it-yourself offerings like
Docker Enterprise Edition, highlighting the widespread popularity of this orchestration
tool. Kubernetes is more powerful, customizable, and flexible, which comes at the cost of a
steeper initial learning curve. Running Kubernetes through a managed service simplifies open
source management responsibilities, which allows you to focus on building your applications.

Now that you’ve seen the differences between Kubernetes and Docker Swarm, take a deeper
dive into the IBM Cloud Kubernetes Service and learn how to build a scalable web application
on Kubernetes.

What is Istio?
Learn more about Istio—open technology that provides a way for developers to
seamlessly connect, manage, and secure networks of different microservices.
Istio is a configurable, open source service-mesh layer that connects, monitors, and
secures the containers in a Kubernetes cluster. At this writing, Istio works natively
with Kubernetes only, but its open source nature makes it possible for anyone to write
extensions enabling Istio to run on any cluster software. Today, we'll focus on using
Istio with Kubernetes, its most popular use case.

Kubernetes is a container orchestration tool, and one core unit of Kubernetes is a
pod. A pod consists of one or more containers, along with file systems or other
components. A microservices architecture might have a dozen different pods, each
representing a different microservice. Kubernetes manages availability and resource
consumption of pods, adding pods as demand increases with the pod autoscaler. Istio
injects additional containers into the pod to add security, management, and
monitoring.

Because it is open source, Istio can run on any public cloud provider that supports it
and any private cloud with willing administrators.

The following video explains more about the basics of Istio:

Containers in the enterprise - New IBM research documents the surging momentum of
container and Kubernetes adoption.

Read the e-book (1.4 MB)

The network service mesh


When organizations move to microservices, they need to support dozens or hundreds
of specific applications. Managing those endpoints separately means supporting a
large number of virtual machines (VMs), including enough capacity to meet demand.
Cluster software like Kubernetes can create pods and scale them up, but Kubernetes
does not provide routing, traffic rules, or strong monitoring or debugging tools.

Enter the service mesh.

As the number of services increases, the number of potential communication paths
grows rapidly: two services have only two communication paths, three services have
six, and 10 services have 90. A service mesh provides a single way to configure those
communication paths by creating a policy for the communication.
A service mesh instruments the services and directs communications traffic according
to a predefined configuration. That means that instead of configuring a running
container (or writing code to do so), an administrator can provide the configuration to
the service mesh and have it complete that work, work that previously had to be done
separately for each web server and each service-to-service connection.

The most common way to do this in a cluster is to use the sidecar pattern. A sidecar is
a new container, inside the pod, that routes and observes communications traffic
between services and containers.
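
With Istio, for example, turning on the sidecar pattern is typically just a matter of labeling a namespace so that every new pod created there gets the proxy container injected automatically; a minimal sketch, with a hypothetical namespace name:

# Pods created in this namespace automatically receive the Istio
# sidecar proxy container.
apiVersion: v1
kind: Namespace
metadata:
  name: apps                      # hypothetical namespace
  labels:
    istio-injection: enabled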

Flexible, resilient, secure IT for your hybrid cloud - Containers, Kubernetes and Istio
are part of an open hybrid cloud strategy that lets you build and manage workloads
from anywhere.

Learn more

Istio and Kubernetes


As mentioned earlier, Istio layers on top of Kubernetes, adding containers that are
essentially invisible to the programmer and administrator. Called "sidecar" containers,
these act as a "person in the middle," directing traffic and monitoring the interactions
between components. The two work in combination in three ways: configuration,
monitoring, and management.

Configuration

The primary method to set configuration with Kubernetes is the kubectl command,
commonly "kubectl apply -f <filename>", where the file is a YAML file. Istio users can
either apply new and different types of YAML files with kubectl or use the optional
istioctl command.
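
For instance, a minimal Istio routing rule can be written as YAML and applied with kubectl apply -f routing.yaml; the service name below is borrowed from Istio's well-known Bookinfo sample, and the subset would be defined in a companion DestinationRule:

# Send all traffic for the "reviews" service to its v1 subset.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1                # subset defined in a DestinationRule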

Monitoring

With Istio, you can easily monitor the health of your applications running with
Kubernetes. Istio's instrumentation can manage and visualize the health of
applications, providing more insight than just the general monitoring of cluster and
nodes that Kubernetes provides.

Management
Because the interface for Istio is essentially the same as Kubernetes, managing it takes
almost no additional work. In fact, Istio allows the user to create policies that impact
and manage the entire Kubernetes cluster, reducing time to manage each cluster while
eliminating the need for custom management code.

Featured products

Red Hat OpenShift on IBM Cloud

IBM Cloud Kubernetes Service

IBM Cloud Satellite

Benefits
The major benefits of a service mesh include capabilities for improved debugging,
monitoring, routing, security, and leverage. That is, with Istio, it will take less effort to
manage a wider group of services.

Improved debugging

Say, for example, that a service has multiple dependencies. The pay_claim service at
an insurance company calls the deductible_amt service, which calls the
is_member_covered service, and so on. A complex dependency chain might have 10
or 12 service calls. When one of those 12 is failing, there will be a cascading set of
failures that result in some sort of 500 error, 400 error, or possibly no response at all.

To debug that set of calls, you can use something like a stack trace. On the frontend,
client-side developers can see what elements are pulled back from web servers, in
what order, and examine them. Frontend programmers can get a waterfall diagram to
aid in debugging.

What the frontend waterfall does not show is what happens inside the data center: how
a single API callback fans out to four other web services, and which ones respond
more slowly. Later, we will see how Istio provides tools to trace those function calls
in a similar diagram.

Monitoring and observability

DevOps teams and IT administrators may want to observe traffic to see latency,
time-in-service, errors as a percentage of traffic, and so on. Often, they want to see a
dashboard. A dashboard provides a visualization of the sum or average of those
metrics over time, perhaps with the ability to "drill down" to a specific node, service,
or pod. Kubernetes does not provide this functionality natively.

Policy

By default, Kubernetes allows every pod to send traffic to every other pod. Istio
allows administrators to create a policy to restrict which services can work with each
other. So, for example, services can call only the services that are true dependencies
for them. Another policy to keep services up is a rate limit, which stops excess traffic
from clogging a service and helps prevent denial-of-service attacks.
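
Using the insurance example from later in this article, a sketch of such a restriction as an Istio AuthorizationPolicy might look like the following; the namespace and service account names are illustrative assumptions:

# Allow only the pay_claim service account to call deductible_amt.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deductible-amt-allow
  namespace: claims               # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: deductible-amt         # the pods this policy protects
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/claims/sa/pay-claim"]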

Routing and load balancing

By default, Kubernetes provides round-robin load balancing. If there are six pods that
provide a microservice, Kubernetes will provide a load balancer, or "service," that
sends requests to each pod in increasing order, then it will start over. However,
sometimes a company will deploy different versions of the same service in
production.

The simplest example of this may be a blue/green deploy. In that case, the team might
build an entirely new version of the application in production without sending
production users to it. After promoting the new version, the company can keep the old
servers around to make switchback quick in the event of failure.

With Istio, this is as simple as using tagging in a configuration file. Administrators
can also use labels to indicate what type of service to connect to and build rules based
on headers. So, for example, beta users can route to a ‘canary’ pod with the latest and
greatest build, while regular users go to the stable production build.
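
A sketch of such a header-based rule follows; the service name, header, and subset names are illustrative assumptions, and the subsets themselves would be defined in a companion DestinationRule:

# Route requests that carry "x-beta-user: true" to the canary build;
# everyone else goes to the stable build.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: storefront
spec:
  hosts:
  - storefront
  http:
  - match:
    - headers:
        x-beta-user:
          exact: "true"
    route:
    - destination:
        host: storefront
        subset: canary
  - route:
    - destination:
        host: storefront
        subset: stable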

Circuit breaking

If a service is overloaded or down, additional requests will fail while continuing to
overload the system. Because Istio is tracking errors and delays, it can force a pause,
allowing a service to recover, after a specific number of failed requests set by policy.
You can enforce this policy across the entire cluster by creating a small text file and
directing Istio to use it as a new policy.
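
In Istio, that pause is configured as outlier detection on a DestinationRule, which temporarily ejects a failing host from the load-balancing pool. A minimal sketch follows; the field names track recent Istio releases, and the service name and numbers are illustrative assumptions:

# After 5 consecutive 5xx errors, eject the failing host for 30 seconds
# so it has a chance to recover.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: deductible-amt
spec:
  host: deductible-amt            # hypothetical service
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 10s               # how often hosts are evaluated
      baseEjectionTime: 30s       # minimum ejection duration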

Security
Istio provides identity, policy, and encryption by default, along with authentication,
authorization, and audit (AAA). Any pods under management that communicate with
others will use encrypted traffic, preventing observation by outsiders. The identity
service, combined with encryption, ensures that no unauthorized user can fake, or
"spoof," a service call. AAA gives security and operations professionals the tools they
need to monitor, with less overhead.

Simplified administration

Traditional applications still need the identity, policy, and security features that Istio
offers. Without it, programmers and administrators work at the wrong level of
abstraction, reimplementing the same security rules over and over for every service.
Istio allows them to work at the right level, setting policy for the cluster through a
single control panel. At the same time, with Istio's access controls, dashboards, and
debugging tools described below, you can easily add a plugin at the command line
rather than go to a web page.

Examples
Visualize services

Istio 1.1 includes a new add-on called Kiali that provides a web-based visualization.
You can use it to track service requests, drill into details, or even export the service
request history as JSON to query and format in your own way. Kiali's workload graph
is a dependency graph generated in real time from actual observations of the traffic
between services.

Trace service calls

The Jaeger service, a component of Istio, provides tracing for any given service. In a
trace of the product page, for example, every dot represents a service call. By clicking
on a dot, we can "drill down" into a waterfall diagram to follow the exact service
requests and responses. Looking more closely at the product page, we can see that the
errors are in the product page itself, while the details service returned successfully.

Dashboards
Istio comes with many dashboards (out of the box) to monitor system health and
performance. These can measure CPU and memory utilization, traffic demand, the
number of 400 and 500 errors, time to serve requests, and more. Best of all, they are
available by simply installing and running Istio and adding Grafana, one of the
included open source dashboard tools for Istio. Istio also provides two other
dashboards: Kiali and Jaeger.

Istio Tutorials
The Istio website (link resides outside IBM) includes lots of helpful documentation
and instructions for getting started with Istio. 

Istio and IBM Cloud


An enterprise container platform, built around Kubernetes and open source
technologies such as Istio, provides orchestration across multiple public and private
clouds that unifies your environments for improved business and operational
performance. It’s a key component of an open hybrid cloud strategy that lets you
avoid vendor lock-in, build and run workloads anywhere with consistency,
and optimize and modernize all of your IT.

Take the next step:

 Deploy highly available, fully managed Kubernetes clusters with Red Hat OpenShift on IBM Cloud, a managed OpenShift service that leverages the enterprise scale and security of IBM Cloud to automate updates, scaling and provisioning. Red Hat OpenShift on IBM Cloud includes an OpenShift Service Mesh capability that uses the Istio control plane to control connections between containerized services, enforce policies, observe behaviors and more.
 Gain improved control of your containerized applications with IBM Cloud Kubernetes Service, which provides seamless installation of Istio, automatic updates and lifecycle management of control plane components, and integration with platform logging and monitoring tools.
 Deploy and run apps across on-premises, edge computing and public cloud environments from any vendor with IBM Cloud Satellite, a managed distributed cloud solution.
 Simplify and consolidate your data lakes by seamlessly deploying container-enabled enterprise storage across on-premises and public cloud environments with IBM hybrid cloud storage solutions.
 Make complex hybrid IT management simple with IBM Cloud managed services.

Get started with an IBM Cloud account today.

What is supply chain management?


Supply chain management is the handling of the entire production flow of a good or
service to maximize quality, delivery, customer experience and profitability.

Read IDC Spotlight: Benefits of the modern control tower

What is supply chain management?


Supply chain management is the handling of the entire production flow of a good or
service — starting from the raw components all the way to delivering the final product
to the consumer. A company creates a network of suppliers (“links” in the chain) that
move the product along from the suppliers of raw materials to those organizations that
deal directly with users.

How does supply chain management work?


According to CIO¹, there are five components of traditional supply chain management
systems:

Planning

Plan and manage all resources required to meet customer demand for a company’s
product or service. When the supply chain is established, determine metrics to
measure whether the supply chain is efficient, effective, delivers value to customers
and meets company goals.
Sourcing

Choose suppliers to provide the goods and services needed to create the product.
Then, establish processes to monitor and manage supplier relationships. Key
processes include: ordering, receiving, managing inventory and authorizing supplier
payments.

Manufacturing

Organize the activities required to accept raw materials, manufacture the product, test
for quality, package for shipping and schedule for delivery.

Delivery and Logistics

Coordinate customer orders, schedule deliveries, dispatch loads, invoice customers
and receive payments.

Returning

Create a network or process to take back defective, excess or unwanted products.

Why is supply chain management important?


Effective supply chain management systems minimize cost, waste and time in the
production cycle. The industry standard has become a just-in-time supply chain where
retail sales automatically signal replenishment orders to manufacturers. Retail shelves
can then be restocked almost as quickly as product is sold. One way to further
improve on this process is to analyze the data from supply chain partners to see where
further improvements can be made.

By analyzing partner data, the CIO.com post¹ identifies three scenarios where
effective supply chain management increases value to the supply chain cycle:

 Identifying potential problems. When a customer orders more product than the manufacturer can deliver, the buyer can complain of poor service. Through data analysis, manufacturers may be able to anticipate the shortage before the buyer is disappointed.
 Optimizing price dynamically. Seasonal products have a limited shelf life. At the end of the season, these products are typically scrapped or sold at deep discounts. Airlines, hotels and others with perishable “products” typically adjust prices dynamically to meet demand. By using analytic software, similar forecasting techniques can improve margins, even for hard goods.
 Improving the allocation of “available to promise” inventory. Analytical software tools help to dynamically allocate resources and schedule work based on the sales forecast, actual orders and promised delivery of raw materials. Manufacturers can confirm a product delivery date when the order is placed, significantly reducing incorrectly filled orders.

Key features of effective supply chain management


The supply chain is the most obvious “face” of the business for customers and
consumers. The better and more effective a company’s supply chain management is,
the better it protects its business reputation and long-term sustainability.

IDC’s Simon Ellis in The Path to a Thinking Supply Chain² defines supply chain
management by identifying the five “Cs” of the effective supply chain management of
the future:

 Connected: Being able to access unstructured data from social media, structured data from the Internet of Things (IoT) and more traditional data sets available through traditional ERP and B2B integration tools.
 Collaborative: Improving collaboration with suppliers increasingly means the use of cloud-based commerce networks to enable multi-enterprise collaboration and engagement.
 Cyber-aware: The supply chain must harden its systems and protect them from cyber-intrusions and hacks, which should be an enterprise-wide concern.
 Cognitively enabled: The AI platform becomes the modern supply chain's control tower by collating, coordinating and conducting decisions and actions across the chain. Most of the supply chain is automated and self-learning.
 Comprehensive: Analytics capabilities must be scaled with data in real time. Insights will be comprehensive and fast. Latency is unacceptable in the supply chain of the future.

Many supply chains have begun this process, with participation in cloud-based
commerce networks at an all-time high and major efforts underway to bolster
analytics capabilities.

Explore supply chain management thought leadership articles

Evolution of supply chain management


While yesterday’s supply chains were focused on the availability, movement and cost
of physical assets, today’s supply chains are about the management of data, services
and products bundled into solutions. Modern supply chain management systems are
about much more than just where and when. Supply chain management affects
product and service quality, delivery, costs, customer experience and ultimately,
profitability.

As recently as 2017, a typical supply chain accessed 50 times more data than just five
years earlier.¹ However, less than a quarter of this data is being analyzed.  That means
the value of critical, time-sensitive data — such as information about weather, sudden
labor shortages, political unrest and microbursts in demand — can be lost.

Modern supply chains take advantage of massive amounts of data generated by the
chain process, data that is curated by analytical experts and data scientists. Future
supply chain leaders and the Enterprise Resource Planning (ERP) systems they manage
will likely focus on optimizing the usefulness of this data — analyzing it in real time
with minimal latency.

Supply chain consulting


With IBM Services, you can evolve your supply chain processes into intelligent
workflows, to reach new levels of responsiveness and innovation. Challenge siloed
processes to uncover efficiencies, enable your teams to execute and deliver, and use
emerging technologies like AI and blockchain to unlock opportunities in every step of
the value chain — from demand planning to order orchestration and fulfillment.

What is continuous deployment?


This guide explores the concept of a continuous deployment strategy and how it
supports enterprise scalability.
Continuous deployment is a strategy in software development where code changes to
an application are released automatically into the production environment. This
automation is driven by a series of predefined tests. Once new updates pass those
tests, the system pushes the updates directly to the software's users.
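
In pipeline terms, the flow is simple: every merge triggers the automated tests, and a passing run deploys straight to production with no human gate. Here is a minimal sketch in GitHub Actions syntax, one of many CI/CD tools; the make targets are placeholders for a real test suite and deploy script:

# Tests gate the deploy; a green run goes straight to production.
name: continuous-deployment
on:
  push:
    branches: [main]
jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test            # placeholder: automated test suite
      - run: make deploy          # placeholder: release to production;
                                  # runs only if the tests passed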

Continuous deployment offers several benefits for enterprises looking to scale their
applications and IT portfolio. First, it speeds time to market by eliminating the lag
between coding and customer value—typically days, weeks, or even months.

In order to achieve this, regression tests must be automated, thereby eliminating
expensive manual regression testing. The systems that organizations put in place to
manage large bundles of production change—including release planning and approval
meetings—can also be eliminated for most changes.

Continuous deployment vs. …


Continuous deployment vs. continuous delivery

While “continuous deployment” and “continuous delivery” may sound like the same
thing, they are actually two different approaches to frequent release.

Continuous delivery is a software development practice where software is built in
such a way that it can be released into production at any given time. To accomplish
this, a continuous delivery model involves production-like test environments. New
builds performed in a continuous delivery solution are automatically deployed into an
automatic quality-assurance testing environment that tests for any number of errors
and inconsistencies. After the code passes all tests, continuous delivery requires
human intervention to approve deployments into production. The deployment itself is
then performed by automation.

Continuous deployment takes automation a step further and removes the need for
manual intervention. The tests and developers are considered trustworthy enough that
an approval for production release is not required. If the tests pass, the new code is
considered to be approved, and the deployment to production just happens.

Continuous deployment is the natural outcome of continuous delivery done well.
Eventually, the manual approval delivers little or no value and merely slows things
down. At that point, it is done away with and continuous delivery becomes continuous
deployment.
See the following video from Eric Minick for more on the difference between
continuous deployment and continuous delivery:

Continuous deployment vs. continuous integration

Another key element in ensuring seamless, continuous deployment is continuous
integration. In order for automation of deployment processes to work, all the
developers working on a project need an efficient way of communicating the changes
that take place. Continuous integration makes this possible.

Typically, when working on the same software development project, developers work
off of individual copies of a master branch of code. However, functionality issues and
bugs can occur after developers merge their changes onto the main codebase,
especially when developers work independently from each other. The longer they
work independently, the higher the risk.

With CI, everyone merges their code changes into a repository at least once per day.
As updates occur, automated build tests run to ensure that any changes remain
compatible with the master branch. This acts as a fail-safe to catch integration
problems as quickly as possible.
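
A minimal sketch of that fail-safe, again in GitHub Actions syntax with placeholder commands: every push to the shared branch, and every pull request against it, triggers an automated build and test run.

# Build and test every change headed for the shared master branch.
name: continuous-integration
on:
  push:
    branches: [master]
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build           # placeholder build step
      - run: make test            # placeholder automated build tests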

For a closer look at how continuous integration differs from continuous delivery and
continuous deployment, see the blog post “Continuous integration vs. continuous
delivery: A quick explainer” and the video "What is Continuous Integration?":

Continuous deployment tools


To continuously develop and deploy high-quality software improvements, developers
must use the appropriate tools for building effective DevOps practices. Doing so not
only ensures efficient communication between development and operations teams but
also minimizes or eliminates errors in the software delivery pipeline.

Here are some of the most crucial tools used in a continuous deployment workflow:

 Version control: Version control helps with continuous integration by tracking
revisions to a particular project’s assets. Also known as “revision” or “source”
control, version control helps to improve visibility of a project's updates and
changes while helping teams collaborate regardless of where and when they
work.
 Code review: As simple as it sounds, “code review” is a process of using tools
to test the current source code. Code reviews help improve the integrity of
software by finding bugs and errors in coding and help developers address
these issues before deploying updates.
 Continuous integration (CI): CI is a critical component of continuous
deployment and plays a major part in minimizing development roadblocks
when multiple developers work on the same project. A variety of proprietary
and open source CI tools exist, each catering to the unique complexities of
enterprise software deployments.
 Configuration management: Configuration management is the strategy and
discipline of making sure all software and hardware maintain a consistent
state. This includes proper configuration and automation of
all servers, storage, networking, and software.
 Release automation: Application release automation (or application release
orchestration) is very important when automating all of the activities
necessary to drive continuous deployment. Orchestration tools connect
processes to one another to ensure developers follow all necessary steps
before pushing new changes to production. These tools work closely with
configuration management processes to ensure that all project environments
are properly provisioned and able to perform at their highest level.
 Infrastructure monitoring: When operating a continuous deployment model,
it’s important to be able to visualize the data that lives in your testing
environments. Infrastructure monitoring tools help you analyze application
performance to see if changes you make have a positive or negative impact.

Working with Kubernetes


Kubernetes is a great open source solution to use when developing a continuous
deployment pipeline. Because it is flexible, logical, and intuitive to operate,
Kubernetes makes it possible to reduce the common problems that arise from server
usage restrictions and outages, while supporting modern infrastructure
and multicloud deployments.

Kubernetes helps increase the agility of DevOps processes. Because of its modular
design, Kubernetes allows alteration of individual pods inside a service, as well as
seamless transitions between pods. This flexibility helps development teams avoid
server downtime and allows for maximum resource utilization when
running microservices. Kubernetes is also an extremely reliable platform that can
detect the readiness and overall health of applications and services before they’re
deployed to the public.
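
That readiness and health detection is configured per container through probes. Here is a pod-spec fragment as a minimal sketch, with hypothetical endpoints and timings:

# Kubernetes withholds traffic until the readiness probe passes and
# restarts the container if the liveness probe starts failing.
containers:
- name: web
  image: example.com/web:1.0      # placeholder image
  readinessProbe:
    httpGet:
      path: /healthz/ready        # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 5
  livenessProbe:
    httpGet:
      path: /healthz/live         # hypothetical health endpoint
      port: 8080
    periodSeconds: 10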

Continuous deployment across diverse applications


When creating continuous delivery or continuous deployment infrastructure, it’s
important to source the right enterprise solution that will give your business the
confidence it needs to automate software testing and deployment processes. IBM
UrbanCode Deploy is an application deployment automation platform that provides
the visibility, traceability, and auditing capabilities businesses need to drive their
software development needs in one optimized package.

Multicloud deployments

Using UrbanCode Deploy’s Easy Process and Blueprint Designer, organizations can
create custom cloud environment models to visualize how their applications should be
deployed to public, private, and hybrid cloud. Blueprint Designer allows users to
create, update, and break down full-stack computing environments while enabling full
cloud orchestration capabilities. All environments can then be provisioned to deploy
application components automatically or on demand.

Distributed automation

UrbanCode Deploy is a highly scalable solution that supports the dynamic deployment
of all mission-critical applications and services. Architected to meet the unique
requirements of enterprises deploying across multiple data centers, UrbanCode Deploy
supports master server clustering and uses lightweight deployments to provide
immediate availability of services.

Quality gates and approvals

Being able to rely on the accuracy of automated testing environments is absolutely
critical to successfully achieving continuous deployment. For some environments,
however, it’s necessary to create certain conditions that flag manual approvals to
ensure that the right information is pushed to production at the right time. UrbanCode
Deploy features deployment approvals and gates to give administrators more control,
visibility, and auditing capabilities over their continuous deployment processes.
Tested integrations

While UrbanCode Deploy supports the use of your own scripts, out-of-the-box
plugins make deployment processes easier to design and manage. By using tested
integrations, developers can utilize pre-built automation that has already been proven.
This replaces the need to create custom scripts specifically for UrbanCode Deploy.

IBM UrbanCode Deploy features advanced process orchestration and collaboration
tools that make it possible for enterprises to organize all of their deployment needs in
one easy-to-use, customizable dashboard. Whether deploying applications on-premises,
off-premises, or across thousands of managed servers, UrbanCode Deploy gives you
all the solutions you need to ensure continuous delivery and rapid deployment across
your entire enterprise.

To learn more about IBM UrbanCode Deploy and how it can evolve your deployment
process, explore IBM’s deployment automation solution.

Continuous deployment and IBM Cloud


The ability to release code changes automatically into the production environment can
help dramatically speed time to market. You can do this with IBM tools as well as
integrations with third parties and open source plugins. IBM processes and tools can
help you with one of the most challenging DevOps initiatives organizations face—
building and modernizing applications on the journey to cloud.

Take the next step:

 Adopt effective DevOps practices by using open toolchains to build, deploy and manage your apps on the cloud.
 Get started developing a continuous deployment pipeline with IBM Cloud.
 Learn about IBM UrbanCode Deploy, an application deployment automation
platform that provides visibility, traceability and auditing capabilities.

Get started with an IBM Cloud account today.      
