Over the past decade, virtualization technologies have gone from educational tools to full-blown IT solutions. Virtualization has many benefits; among them, application isolation and better utilization of hardware resources are the most valuable, because they give more operational flexibility. Many organizations invest heavily in virtualization technology. Virtual environments are easier to maintain, as they do not require software installation and updates on individual computers, which saves both time and money. But conventional virtualization carries resource overhead. Containerization, or container-based virtualization, consumes fewer resources and performs faster by design. Application deployment is much faster in a container than in a virtual machine. Companies like Google, Facebook, Twitter, AWS, PayPal and the BBC use container technology in their environments.
Will this be a threat to conventional virtualization technology? Do we need to invest again for containerization? Is migration from VM to container possible, and do we even need it? Before answering these questions, let's look at what virtualization and containerization are, along with their benefits and drawbacks.
VIRTUALIZATION:
Virtualization is the process of creating a virtual version of something, such as a server or computer system, using software instead of hardware. The evolution of virtualization revolves around one very important piece of software: the hypervisor. And what is a hypervisor? It is software that allows physical hardware (RAM, CPU, hard disk, network card, etc.) to share its resources among virtual machines running as guests on top of that physical hardware. There are two types of hypervisor: Type I, or bare-metal, and Type II, or hosted.
CONTAINERIZATION:
Containerization, also called container-based virtualization, is an OS-level virtualization method for deploying and running distributed applications without launching an entire VM. Containers do not require a hypervisor and therefore provide better performance than applications running in virtual machines. They share the host system's kernel with other containers. They are image-based, which is far lighter than a full operating system. Images are kept in online repositories, and you can build your own customized image.
VM AND CONTAINER:
A Virtual Machine (VM) is an emulation of a real computer that executes programs like a real computer, with the help of a hypervisor. Like a real machine, if you want to run a VM you need to install a full operating system. Each VM has its own kernel.
Containers are image-based, isolated, runnable instances of an image; they are lighter than VMs and share the host kernel with other containers.
The goals of containers and VMs are the same: to isolate an application and its dependencies into a separate unit that can run anywhere. For this reason, some say that containers and VMs are the same, or that a container is just a lighter version of a VM. That is not true; they have different architectures. Let's explain the differences between containers and VMs with an analogy.
Think of a VM as a standalone grocery shop. It has a door where you can enter, its own signboard with the shop's name, and its own infrastructure for electricity, plumbing, air conditioning and so on, all managed by the shop owner. There is nothing he or she needs to share. If the owner wants to start another shop, it has to be built from scratch: raising the walls, installing the electrical setup, and so on. A VM is just like that: to launch a new VM, you have to create it and then install a full operating system.
A container is like a shop in a shopping mall. The shops may vary in size and sell different products, but they all share the mall's electrical, plumbing and air-conditioning systems; they are built around shared infrastructure. If a new owner wants to open a shop, he or she just needs to decorate it and open the doors, without worrying about the infrastructure, because it is already there. Launching a container is like that: you just launch it, and it downloads the image (on the first launch; after that, it launches from the local host), which is far lighter than a full OS. There is no need to install a full operating system, because the container shares the host kernel and other resources.
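The pull-once, launch-from-cache flow described above can be sketched as a toy model in Python (the `ImageCache` class and image names here are illustrative, not a real container runtime's API):

```python
# Toy model of container launch: "pull" the image on first use,
# then start instantly from the local cache on every later launch.

class ImageCache:
    def __init__(self):
        self._local = {}  # image name -> locally stored image data

    def launch(self, image):
        if image not in self._local:
            # First launch: download the image from a remote registry.
            self._local[image] = f"layers-of-{image}"
            source = "registry"
        else:
            # Later launches reuse the cached copy, which is much faster.
            source = "local cache"
        return f"container started from {source}"

cache = ImageCache()
print(cache.launch("ubuntu:22.04"))  # first launch pulls the image
print(cache.launch("ubuntu:22.04"))  # second launch uses the local copy
```

A real runtime does far more (layer verification, copy-on-write mounts, process setup), but the caching behaviour is the reason the second launch of the same image takes seconds rather than minutes.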
WHO ARE THEY:
The best-known container technology is Linux Containers, or LXC. There are others, such as OpenVZ, Linux-VServer, Rocket (rkt), Red Hat OpenShift and FreeBSD jails. Docker by Docker Inc. and LXD by Canonical are the most common names in the container world. But the truth is that neither of them is a container itself: Docker is an application that uses container technology, mainly LXC, and according to Canonical, LXD is a pure container hypervisor that also uses LXC features. VMware Workstation, VirtualBox, VMware ESXi, Citrix XenServer, KVM, Microsoft Hyper-V and so on are all Type I and Type II hypervisors.
SPEED:
A container is much faster than a VM, or virtual machine. In virtualization, you need to install a full operating system to run a VM, and every VM has its own kernel, so installation and boot times are relatively high: it takes minutes to start a VM.
In containerization, each container shares the host kernel. Containers are image-based and do not need a full operating system install. Once an image has been downloaded from the online repository, it is stored on the local machine, and that local copy is used to launch the container. An image is like a template, and it is read-only: you cannot change the base image, but you can use it to run containers or to build your own image. Because containers share the host kernel, creation and boot times are much lower; a container takes seconds to boot. Starting a container is just like starting a process on the host system.
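The read-only template idea can be illustrated with a minimal copy-on-write sketch (a toy Python model, not a real overlay filesystem): each container writes to its own layer, while reads fall through to the shared, untouched base image.

```python
# Toy copy-on-write model: the base image is read-only and shared;
# each container gets its own writable layer on top of it.

BASE_IMAGE = {"/etc/os-release": "Ubuntu 22.04", "/bin/sh": "shell binary"}

class Container:
    def __init__(self, image):
        self._image = image  # shared base image, never modified
        self._layer = {}     # this container's private writable layer

    def write(self, path, data):
        self._layer[path] = data  # changes land in the container layer only

    def read(self, path):
        # The container's own layer shadows the base image.
        if path in self._layer:
            return self._layer[path]
        return self._image[path]

c1 = Container(BASE_IMAGE)
c2 = Container(BASE_IMAGE)
c1.write("/etc/os-release", "patched")
print(c1.read("/etc/os-release"))  # "patched": read from c1's own layer
print(c2.read("/etc/os-release"))  # "Ubuntu 22.04": base image untouched
```

This is why one downloaded image can back any number of containers: modifying a file inside one container never alters the template the others launch from.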
Remember the grocery shop and shopping mall analogy I gave earlier.
SECURITY:
Containers are implemented using the namespaces and control groups (cgroups) features of the Linux kernel, so they use the security features provided by the namespaces API. LXC supports Linux Security Modules (LSMs) such as AppArmor and SELinux; LXD currently supports only AppArmor. All containers are isolated from other containers and from the host system, but you can connect them if required.
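On a Linux host you can actually see the namespaces a process belongs to: the kernel exposes one symlink per namespace under /proc/<pid>/ns, and two processes in the same namespace share the same identifier. This short sketch just lists them (it returns an empty result on systems without /proc):

```python
import os

def namespace_ids():
    """Map namespace type (pid, net, uts, mnt, ...) to its identifier
    for the current process, read from /proc/self/ns on Linux.
    Returns an empty dict on systems without /proc."""
    ns_dir = "/proc/self/ns"
    if not os.path.isdir(ns_dir):
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for ns, ident in namespace_ids().items():
    print(f"{ns}: {ident}")
```

A container runtime creates fresh namespaces of each type for a new container, so the container sees its own process IDs, hostname, mounts and network stack while still running on the shared host kernel.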
Virtual machines are isolated from one another, so basic security is already in place. Because a VM runs a full OS, security features can easily be enhanced inside it, and it is also possible to provide security at the hypervisor level.
COST:
Investment is a major factor in implementation: how much will you invest, and is that investment feasible for the project? These are the investors' main concerns. To implement either of these two technologies, some resources, such as hardware, are mandatory, so cost is involved in that area. A hypervisor is a requirement for virtualization but not for containerization, so you can skip the hypervisor cost if you deploy containers. For the host or guest OS, cost depends on the type of OS you use, open source or licensed. With a licensed OS, cost is a big factor, because you have to pay a license fee for the host OS and also for each guest OS; the same is true for the applications that run inside the VMs. Containers are image-based, and images come in both free and paid varieties, so cost depends on the types of images you use. Docker comes in two flavors, Docker Community Edition (Docker CE) and Docker Enterprise Edition (Docker EE); the Community Edition is free, but if you want to use Docker Enterprise, you have to invest. Other cost factors are involved as well, and they vary according to requirements and the organization. Here I have just tried to give a basic idea of the common factors to consider during budget planning.
SNAPSHOTS AND BACKUP:
There is also the question of snapshots and backups for containers and VMs. A VM can take a snapshot of the running machine, which can later be restored so the VM resumes from that point. LXD has a snapshot feature just like a VM's, so backups can be done the same way, and backup and restore are much faster than for a VM. With Docker, you don't actually need to back up the container: the data doesn't live in the container, it lives in a named volume that can be shared between the containers you define. Back up the data volume and forget about the container; you can run your container again and attach the volume at any time, within a minute.
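The point about named volumes can be sketched with a toy model (illustrative Python classes, not the real Docker API): the data is held by the volume object, so removing the container loses nothing, and a fresh container attaching the same volume sees it all.

```python
# Toy model: a named volume outlives the containers that mount it.

class Volume:
    def __init__(self, name):
        self.name = name
        self.data = {}  # the persistent data lives here

class Container:
    def __init__(self, volume):
        self.volume = volume  # the named volume mounted into this container

    def write(self, key, value):
        # Writes go to the volume, not to the container's own filesystem.
        self.volume.data[key] = value

appdata = Volume("appdata")

web = Container(appdata)
web.write("orders.db", "100 rows")
del web  # "remove" the container; the volume is untouched

# A fresh container attaches the same named volume and sees the data.
web2 = Container(appdata)
print(web2.volume.data["orders.db"])  # the data survived the container
```

This is the design choice behind "back up the volume, forget the container": the container is disposable, the volume is the unit worth protecting.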
CONTAINERS OR VMS:
Virtualization is widely implemented around the globe, and investment in this area is quite high, so implementing containerization in a virtualized environment raises some common questions: Is migration possible? Do we really need to migrate? Do we have to invest again? What benefits will it give us?
Both technologies have benefits and drawbacks. Application deployment in a container is much faster than in a VM, because containers share the host kernel. The major drawback of this kernel sharing is that you cannot run Windows containers on Linux or Linux containers on Windows, because they never share a kernel. It is possible to run a container inside a VM without any problem; yes, there is some overhead, but containers run smoothly inside VMs. So migration is not mandatory.
If you need to run multiple copies of the same application, a container is a better choice than a VM. On the other hand, if you have to run multiple different applications, a VM is good for you. Finally, if you are bored of installing operating systems, it's time to try containers; you don't even need to install anything on your own machine, because there are online resources for practicing with containers. See the online resources section if you are curious about containers. ■