Containerization and Cloud-Native Computing Explained

Many IT professionals today still have little to no knowledge of what cloud-native computing is, how it is structured and how it is transforming the cloud industry. That’s quite understandable, with all the rapid changes going on. Following mature concepts like server virtualization, cloud-based computing and DevOps, cloud-native and containerization are the new kids on the block.

But what do they actually do? How can we make the best use of them? As if all the vertical market segments of cloud computing were not confusing enough already! It’s also important to understand that there is a difference between ‘cloud-based’ and ‘cloud-native’. It’s a nuance, but a very important one.

Let’s dive in.

Natural Progression

Way back when, each application needed its own dedicated physical hardware, memory and disk space. Then came virtual machines – a welcome step towards flexibility. Virtual machines allow you to run a variety of application workloads, using different OS versions, in virtual instances on the same physical server.

The game-changer for IT departments and software developers was – and still is – that virtualization also enables you to run multiple applications on the same physical server, meaning you can pool together the unused chunks of CPU, memory, disk space and network capacity on a near-infinite number of servers and put them to better use.

If an application used, say, only 40% of the available CPU capacity, virtualization could carve up the remaining 60% and redeploy it to run other apps. Virtualization itself originated long ago as a mainframe architecture, later applied to x86-based client/server architectures.

This ‘cloud-based’ development approach uses traditional browser-based tools and technologies to develop applications and workloads and deploy them to the hybrid compute resources of your choice. Cloud-native and containers take this to another level – allowing you to run a maximum number of applications on a minimum number of servers.

What Is A Container?

We all know what containers are in our everyday lives: to put it very banally, they hold stuff. Application containers are lightweight, standalone holders for preparing software to be deployed on shared computing resources. They can be mounted and deployed on compute resources located anywhere – in a hyperscale public cloud or on on-premise virtual infrastructure. The benefit of containers is their ability to run an application (or a piece of it) modularly, in an isolated, secure environment.

Servers and containers are thus two different animals. With containers, the virtual or physical server becomes less relevant. Sure, all applications run on [virtual] hardware at the end of the day, but in cloud-native computing the logical entity – the containerized application – takes the lead. Containers are modular and can be freely scaled and moved around. A container doesn’t care whether it runs on hybrid infrastructure resources like VMware, OpenStack, Azure, AWS, Google Cloud or on a bare-metal server. Before containers, cloud-based computing relied on IaaS-based resource platforms that required so-called ‘lift and shift’ migrations – technically very challenging, even with the good tools that now exist in that space. Containers eschew this approach and can be mounted and run anywhere with far less effort.

A container bundles the following subcomponents:

  • An isolated, virtualized view of the OS kernel
  • Your application (or a piece of it!)
  • Middleware
  • Runtime dependencies and libraries

Containers don’t care whether the underlying resource is virtualized; they just see a place to land. And instead of virtualizing a physical server’s resources – as VMware does, for example – a container virtualizes at the operating-system level, giving the application a secure, isolated instance of the core of the operating system, otherwise known as the OS kernel. Bundled with other software, like middleware and runtime libraries, this makes an application runnable.

A container can be deployed on bare metal, on virtual machines, or in a cloud. And to orchestrate containers, you use a platform such as the popular Kubernetes, as the sketch below illustrates.
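
To make that concrete, here is a minimal sketch of handing an application to an orchestrator, using the official Kubernetes Python client. The image (nginx:1.25), the ‘default’ namespace, the names and the replica count are illustrative assumptions, not a prescription:

```python
# Minimal sketch: asking Kubernetes to run and supervise a containerized app.
# Assumes the official "kubernetes" Python client and a valid kubeconfig;
# the image, names and replica count are illustrative only.
from kubernetes import client, config

def deploy_containerized_app():
    config.load_kube_config()  # read local kubeconfig credentials

    container = client.V1Container(
        name="demo-app",
        image="nginx:1.25",  # any containerized application image
        ports=[client.V1ContainerPort(container_port=80)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the orchestrator keeps three identical containers alive
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    # Kubernetes now schedules the containers onto whatever nodes it manages:
    # bare metal, VMs or cloud instances; the container doesn't care.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    deploy_containerized_app()
```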

What Is Cloud-Native?

Let’s start with a definition from The Linux Foundation, which spawned the Cloud Native Computing Foundation (CNCF) in 2015: “Cloud native computing uses an open-source software stack to deploy applications as microservices, packaging each part into its own container and dynamically orchestrating those containers to optimize resource utilization.”

The cloud-native approach is an agile practice and methodology – using a combination of container orchestration platforms such as Kubernetes, a microservices architecture and DevOps to build and deploy containerized applications. Microservices are portions of an application, each running in its own container. They are modular, can be developed independently, and can be scaled up and down on demand – representing the best of cloud computing.
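
To illustrate what one such portion looks like, here is a minimal sketch of a single microservice, assuming Python and the Flask framework; the routes, service responsibility and data are invented for the example. Each service like this is packaged into its own container and scaled independently of the rest of the application:

```python
# Minimal sketch of one microservice, assuming Flask; routes and data are
# invented for illustration. In a microservices architecture, this service
# runs in its own container and scales independently of its siblings.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def health():
    # Conventional liveness endpoint an orchestrator can probe
    return jsonify(status="ok")

@app.route("/orders/<order_id>")
def get_order(order_id):
    # One narrow business capability per service; real storage omitted here
    return jsonify(order_id=order_id, status="shipped")

if __name__ == "__main__":
    # Bind to all interfaces so the container's port can be mapped externally
    app.run(host="0.0.0.0", port=8080)
```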

Is there a Common Thread to Tie it Together?

Wow, that’s a lot of stuff! In summary, it’s a simple equation:

*Cloud-Native = Containers + Microservices + Cloud Infrastructure + DevOps + Continuous Delivery.* Now, the challenge is tying it all together: automating the orchestration workflows so that such hybrid approaches can be managed centrally, with agnostic tools, across hybrid cloud resources. You need an ‘orchestrator of orchestrators’ with integrated business process automation. That’s what a Cloud Management Platform does for you. It is the common thread through these concepts and methodologies.

Cloud Management Platforms

Businesses want to automate how they deploy their containers, where they deploy them and what software they deploy them with. If you currently have a Cloud Management Platform (CMP) that manages how DevOps CM tools connect to and create container deployments using Kubernetes, you’re far ahead of most organizations today. Even if you’re only using CM tools to deploy to cloud-based resources (not cloud-native), you will still benefit greatly from using a CMP to link it all together.

The role of Cloud Management Platforms comes down to assisting how IT operations and development teams deploy applications and services, which can be via:

  • Simple Infrastructure-as-a-Service (IaaS) – on-premise or in a hyperscale Public Cloud VM
  • Traditional single-tier or multi-tier non-containerized Platform-as-a-Service (PaaS)
  • Containerized applications – deployed on-premise or in a hyperscale Public Cloud
  • Cloud-Native

Cloud Management Platforms will not replace cloud-native management tools, nor compete with container orchestrators like Kubernetes or with CM tools like Ansible on the DevOps side. A CMP augments their capabilities and adds process workflows around them. There is also no ‘Big Bang’ in which all organizations move immediately to cloud-native architectures for deployment. Make no mistake though: cloud-native is the way forward. The world of cloud is itself hybrid. Traditional hybrid clouds (it sounds a bit funny to say that!), consisting of on-premise and hyperscale Public Cloud resources, will continue to exist – and these must now co-exist with cloud-native for years to come. How to orchestrate and manage these diverse worlds? That’s where we come into play.

Use Cases: Cloud-Based and Cloud-Native

*Use Case #1:* Here is an example of a simple cloud-based use case for our CloudController CMP. A CMP can deploy traditional applications and platforms as a service on CMP-managed resources. CloudController has a Service Catalog Manager from which cloud tenants can deploy virtual machines (IaaS) – plus orchestrate how application stacks are deployed on them to deliver PaaS. CloudController does this via its scripting module, which allows admins to insert, attach and auto-execute CM modules and scripts – such as an Ansible Playbook, Microsoft PowerShell script or Puppet Module – on virtual machines while they’re being rolled out, upgraded/downgraded or deactivated/destroyed. This is standard out-of-the-box functionality of our CloudController CMP today; the sketch below illustrates the general pattern.
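
CloudController’s scripting module itself is proprietary, so purely as a hedged illustration of the pattern – auto-executing a CM script against a VM as part of its rollout – here is a generic Python sketch. The function name, playbook file and variables are invented for the example:

```python
# Generic sketch of a post-deploy hook: once a VM is provisioned, apply a
# CM script (here an Ansible playbook) to it. This is NOT CloudController's
# internal API; the function, playbook and variables are illustrative.
import subprocess

def run_post_deploy_playbook(vm_ip: str, playbook: str = "install_stack.yml") -> str:
    """Apply an Ansible playbook to a freshly provisioned VM."""
    result = subprocess.run(
        [
            "ansible-playbook",
            playbook,
            "-i", f"{vm_ip},",  # ad-hoc one-host inventory (note the trailing comma)
            "--extra-vars", "app_env=production",
        ],
        capture_output=True,
        text=True,
        check=True,  # raise if the playbook fails, so the rollout can halt
    )
    return result.stdout

# e.g. run_post_deploy_playbook("10.0.0.42") once IaaS provisioning reports success
```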

*Use Case #2:* Another, more comprehensive use case: a CMP can automatically query the repositories of Kubernetes, Ansible, Puppet, etc. via REST calls and then manage how container and CM-tooling orchestrations are collected, integrated and executed using value-added business process workflows. These are built right into Service Catalog Item templates. This means that you can automatically build the container platform and your hybrid cloud and/or cloud-native deployment environment directly from the same unified Service Catalog Manager. Each orchestrated deployment is then just another ‘service object’ that is orchestrated, deployed and managed by the CMP.
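
As a minimal sketch of the REST side of this, the query below lists the Deployment objects a Kubernetes cluster exposes through its standard REST API. The cluster endpoint, bearer token and CA-bundle path are assumptions for the example; how a CMP wraps such calls into business process workflows is product-specific:

```python
# Minimal sketch: querying a Kubernetes cluster over its standard REST API.
# The path /apis/apps/v1/... is the real Kubernetes API; the host, bearer
# token and CA-bundle path below are illustrative assumptions.
import requests

K8S_API = "https://k8s.example.internal:6443"  # hypothetical cluster endpoint
TOKEN = "<service-account-bearer-token>"       # assumed auth mechanism
CA_BUNDLE = "/etc/ssl/k8s-ca.crt"              # assumed cluster CA path

def list_deployments(namespace: str = "default") -> list[str]:
    """Return the names of Deployment objects a CMP could orchestrate."""
    resp = requests.get(
        f"{K8S_API}/apis/apps/v1/namespaces/{namespace}/deployments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=CA_BUNDLE,
        timeout=10,
    )
    resp.raise_for_status()
    return [item["metadata"]["name"] for item in resp.json()["items"]]

# e.g. print(list_deployments()) to see what the cluster can run
```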

You can imagine, for example, that your Kubernetes cluster nodes have several container orchestrations in a repository that can be invoked to build and deploy certain types of containers. The microservices can then be built and containerized using a CM tool, for example.

If your application consists of several microservices, you only want to invoke them when their deployment has to occur – in the right sequence – to the resource of choice, using the CM tool of choice. The CloudController CMP will remove the intricacies of this ‘orchestration of orchestrations’, as the whole process will be fully automated when our new Container-as-a-Service (CaaS) automation module becomes generally available later in 2020. Now you know our trick! 😉 There are so many use cases for how CM tooling can be used with Kubernetes – or not. Let’s leave that topic for another, deeper technology blog!

From Hours To Minutes

How would this fit into an IT Ops team? Take the example of one of our customers, a large oil and gas company that runs 3,500 workloads on various hybrid cloud resources. 70% of their deployments are currently simple IaaS, and the majority of the remaining 30% are relatively easy single-tier cloud-based PaaS. They know, however, that a transition to cloud-native looms on the horizon in 12-24 months.

The company’s significant challenge was that it took too long to deploy any workload or platform. Their operations team of 10 people manually runs deployment and post-deployment operational management. To complicate things, the number of workloads is growing steadily at 10-15% per year. Deploying workloads and platforms took them, best-case, 4-12 working hours – with no coordinated governance, accountability or auditing, and little operations automation. That encompassed the whole continuous deployment and roll-out process: resource provisioning, application deployment, set-up, and so on.

With each team member costing them an average of USD 100,000 a year, running and maintaining the environment is a costly endeavor at over $1 million each year just to have the right skills in-house. Add to that the cost of tooling platforms, etc.

Using CloudController to run and operate their hybrid cloud, they can deploy workloads in 15 minutes rather than hours, with full accountability, process auditing and cost information integrated in the CMP. They can also see how many resources are used by each workload and each user – by day, week, month or year – giving them complete insight into, and control over, their services, resource quotas and costs, which can save a large organization such as theirs millions. CloudController orchestrates hybrid cloud deployments, tying together the infrastructure, platform and cost-control aspects in a highly automated fashion.

By the end of the year, when we will also offer full support for container and cloud-native management orchestrators via our CaaS module, their pending transition to cloud-native will be that much easier. They will be ready. Will your business be ready?