Containers - Challenges and Strategies

To begin with, why Containers?

A container is the solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer's laptop to a test environment, from a staging environment into production, or from a physical machine in a data centre to a virtual machine in a private or public cloud.

A container consists of an entire runtime environment: an application plus all of its dependencies, libraries, other binaries, and the configuration files needed to run it, bundled into one package. By containerising the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.

A major benefit is speed: virtual machines may take several minutes to boot their operating systems and begin running the applications they host, while containerised applications can start almost instantly. Containerisation also allows for greater modularity. Instead of running an entire complex application inside a single container, the application can be split into modules (such as the database, the application front end, and so on), which is essentially the microservices approach. Applications built this way are easier to manage because each module is relatively simple, and changes can be made to a module without having to rebuild the entire application. Because containers are so lightweight, individual modules or microservices can be instantiated only when they are needed and are available almost immediately.

Key factors and principles for Containerisation.

One codebase, tracked in revision control - There should always be a one-to-one correlation between the codebase and the app. The app would typically correspond to a container, probably a microservice.

Always store config in the environment - The container is the unit of immutable deployment, and the config values provided by an environment are the wiring that plugs it into the rest of the system.
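As a minimal sketch of this principle in Python (the variable names `DATABASE_URL` and `LOG_LEVEL` are illustrative, not from any particular app), the image stays immutable while each environment injects its own values:

```python
import os

def load_config(env=None):
    """Read configuration from the environment the container is launched with.

    The same image runs in dev, staging and production; only the injected
    environment differs. Defaults here are for local development only.
    """
    env = os.environ if env is None else env
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

The same image can then be wired differently per environment, e.g. `docker run -e DATABASE_URL=... myapp`, with no rebuild.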

Consider backing services as attached resources - When it comes to databases, queues and file storage, containers nudge you further away from storing data and state in the app. Containers can appear and disappear by the bucketload from moment to moment, so it becomes important to externalise data storage. This also makes it easier to delegate the operation of data storage components to the cloud provider.
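To sketch the idea (the `QUEUE_URL` variable name is an assumption for illustration): a backing service is just a URL handed to the container, so swapping a local queue for a managed cloud one is a config change, not a code change.

```python
import os
from urllib.parse import urlparse

def attach_backing_service(env_var, env=None):
    """Resolve a backing service (database, queue, object store) from a URL
    supplied by the environment, rather than baking it into the image."""
    env = os.environ if env is None else env
    url = env.get(env_var)
    if url is None:
        raise RuntimeError(f"backing service {env_var} is not attached")
    parts = urlparse(url)
    return {"scheme": parts.scheme, "host": parts.hostname, "port": parts.port}
```

Because the app only ever sees the URL, the same container can be attached to a throwaway local broker or a provider-managed one.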

Declare and isolate dependencies - This principle is well supported by containers: the app never relies on the implicit existence of system-wide packages, which is exactly what container isolation provides. A container really can contain only the files needed to do its work.

Scale out via the process model - The approach to concurrency and scaling is horizontal: scale out. With a stateless design, one should normally be able to run multiple copies of a service to provide additional capacity. This principle is a default assumption in Docker Compose, Docker Swarm and Kubernetes.
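A small sketch of why statelessness is what makes scale-out work: the handler below keeps no state between requests, so any number of copies can run side by side. Here a thread pool stands in for container replicas purely to keep the example self-contained; in practice the orchestrator would run N identical containers.

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    """Stateless handler: the result depends only on the request, never on
    which replica served it, so replicas are freely interchangeable."""
    return request.strip().upper()

def serve(requests, replicas=4):
    # Adding capacity means adding copies, not making one copy bigger.
    with ThreadPoolExecutor(max_workers=replicas) as pool:
        return list(pool.map(handle, requests))
```

Because `handle` reads and writes nothing outside its arguments, running one replica or eight produces identical results, which is the property an orchestrator relies on when it scales a service up or down.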

Export services via port binding - The contract with the execution environment is to bind to a port and serve requests: everything needed to respond is included in the container image, and the only way to connect to the service is via a port on the container. Because each container has a separate address space, there is potentially even less effort needed to figure out which port to bind to.
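A minimal self-contained sketch using only the Python standard library (reading the port from a `PORT` environment variable is a common convention, assumed here): the service exports itself purely by binding a port.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything needed to answer is inside the container image;
        # the bound port is the whole contract with the outside world.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

def make_server(port=None):
    if port is None:
        port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Handler)

if __name__ == "__main__":
    make_server().serve_forever()
```

The execution environment then maps that container port wherever it likes (e.g. `docker run -p 80:8080 ...`) without the app knowing or caring.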

Separate build and run stages - Build an image, push it to a registry, and then pull it to a target machine to run as a container.

Maximise robustness with fast start-up and graceful shutdown - The principle of disposability is a corollary of the ability to scale out. If one can start an additional dozen copies of a microservice, one should also design for the ability to take instances out of service, whether for reduced capacity or for rolling updates. Start-up time is a serious point of friction that is usually ignored in architecture conversations, particularly where there is a preference for the JVM, especially with the Spring framework. Slow starts can destroy both development cycle time and operational agility whilst loading up the cloud hosting bill.

Development, staging and production should be similar - Building an image once, in a predictable way, and deploying that same image to every environment ensures that each environment runs the same build.

Consider Docker for Containerisation.

Why Docker?

Docker is a tool designed to benefit both developers and system administrators, making it part of many DevOps toolchains. For developers, it means they can focus on writing code without worrying about the system it will ultimately run on. It also lets them get a head start by using one of thousands of programs already designed to run in a Docker container as part of their application. For operations teams, Docker offers flexibility and potentially reduces the number of systems needed because of its small footprint and low overhead.

Talk to us and let's strategise your microservices and containerisation plans.

