Long before cloud computing – in the early days of IT virtualization – I was developing and deploying software and applications. Each implementation required its own fixed hardware, dedicated servers and operating system (OS) environments. In those days, multiple applications in development competed simultaneously for CPU, memory, data storage and network resources.

These environments clearly suffered from limited scalability. And any necessary patches, updates or upgrades triggered downtime for all running applications. Because we had to support multiple environments (test, development, production) – and deploy on more than one OS (Windows, Unix, Linux) – things quickly became complex, expensive and time-consuming.

Thanks to containers – a relatively new technology – many of these issues have been addressed. But before we delve into how containers work and why they matter, let's look at how we got where we are today.

Rise of the virtual machines

Addressing the challenges of scalability, complexity and cost gave rise to IT virtualization and the virtual machine (VM). This newer approach emulated an operating system and used a hypervisor to share and manage the underlying hardware, so you could isolate multiple environments from one another while they ran on the same physical machine. In addition to the host operating system on the physical server, each virtual machine ran its own guest OS for its applications – and the hypervisor managed it all.

Patches, updates or upgrades still triggered downtime for VM-hosted applications. But VMs virtualized the underlying hardware so that multiple instances of the operating system could run on the same physical machine. As a result, businesses could use physical resources more efficiently and cost-effectively.

Despite being virtual, VMs are not lightweight. Each one contains an entire operating system image plus applications, libraries and other dependencies, so VMs can become quite large. And if you're running several VMs on the same server, the combined resource usage of all those guest operating systems quickly becomes excessive. While you can increase resource utilization at the hardware layer by running multiple VMs on large physical hardware, that adds an administrative layer in the VM hypervisor.

Next-generation virtualization: containers

So, what's the next generation of IT virtualization? Containers. The key differentiator for containers is their minimalist nature.

Unlike VMs, containers don't need a full operating system installed inside them. This newer virtualization method packages only an application and its dependencies. A container virtualizes the underlying operating system, so the containerized application perceives that it has the OS – including CPU, memory, data storage and network connections – all to itself. But the host OS constrains the container's access to physical resources, which prevents a single container from consuming all of a host's resources.
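
To make that concrete, here's a minimal sketch using the Docker SDK for Python (one common client library; the image name and the specific limits are just illustrative). The host caps the container's memory and CPU, so the process inside can never starve the rest of the machine.

# A minimal sketch using the Docker SDK for Python (docker-py).
# Assumes a running Docker daemon; the image and limits are illustrative.
import docker

client = docker.from_env()

# The host constrains this container to 256 MB of memory and half a CPU,
# so it can never consume all of the host's resources.
container = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from an isolated container')"],
    mem_limit="256m",        # hard memory ceiling enforced by the host OS
    nano_cpus=500_000_000,   # 0.5 CPU, expressed in units of 1e-9 CPUs
    detach=True,
)

container.wait()                 # block until the containerized process exits
print(container.logs().decode())
container.remove()               # clean up the stopped container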

Containers create an isolation boundary at the application level rather than at the server level. This isolation means that if anything goes wrong in a container (like a process consuming excessive resources), it only affects that individual container – not the whole server.

Since containers share the host operating system, they don't need to boot an OS or load libraries. This makes containers much more efficient and lightweight than VMs. The shared OS approach has the added benefit of reduced overhead for maintenance such as patches, updates and upgrades. Containers effectively decouple applications from their underlying hardware and network infrastructure. This, in turn, unlocks scalability and availability options that increase resource flexibility and utilization.

Since containers don't have the overhead that's typical of VMs, the same infrastructure can support many more containers. This makes containers a cost-effective, lightweight alternative to VMs. They deliver a streamlined, easy-to-deploy and secure method of implementing specific infrastructure requirements. But containers and VMs are not competing technologies. Containers run equally well on virtual and physical machines.

Containers

  • Are minimalistic, efficient and lightweight.
  • Won’t use up all the host’s resources.
  • Run anywhere and require low overhead.
  • Are scalable, flexible and highly available.
  • Simplify development and reduce cost.

Build it once, deploy it anywhere

Containers have become the preferred approach to managing individual environments with multiple dependencies. Because the differences in underlying OS and infrastructure are abstracted, the container can be deployed and run anywhere.

With containers, it no longer matters where the application was developed. You can quickly move it between test, development and production environments without worrying about the underlying dependencies that complicate conventional application deployment.

The containerized application works no matter where it's installed. That's because containers make it easy to share CPU, memory, data storage and network resources at the OS level by abstracting the application from the environment in which it actually runs. This also eliminates compatibility problems between containerized applications that reside on the same operating system. And because containers provide a standardized packaging format that holds all the components needed to run the application, you gain portability between different OS platforms – and consistent execution from one deployment to another.
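
As a rough sketch of that build-once, deploy-anywhere workflow, the example below again uses the Docker SDK for Python; the ./myapp directory, the image tag and the registry are hypothetical placeholders.

# A minimal sketch of "build it once, deploy it anywhere" with the Docker SDK
# for Python. Assumes a running Docker daemon and a ./myapp directory whose
# Dockerfile packages the application and its dependencies (the path and tag
# are placeholders).
import docker

client = docker.from_env()

# Build the image once; everything the application needs is baked in.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Run the same image locally and print its output.
print(client.containers.run("myapp:1.0", remove=True).decode())

# Once the image is pushed to a registry, any other host with a container
# runtime – physical or virtual, private data center or public cloud – can
# pull and run the identical artifact.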

Along with other benefits, using containers reduces the time and resources spent on DevOps, since developers can adjust individual features without affecting the entire application.

Containing the cloud

A big part of the symbiotic relationship between big data and the cloud is the cloud's IT service delivery options for software, platform and infrastructure. Many of these rely on containers, which separate application code from the underlying IT infrastructure. That provides a compact and efficient virtualization layer that has become the standard for application deployment in the cloud. With a container, it doesn't matter whether you're hosting applications in a private data center or on a public cloud – they work anywhere.

Containers bring speed, agility and scale to cloud deployments, especially for advanced analytics. You can immediately stand up a cost-efficient cloud deployment of a preconfigured container with core analytical components – and it's easy to incorporate additional layers on top.
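
As an illustration, the sketch below uses the Docker SDK for Python to pull and start a publicly available, preconfigured data science image (jupyter/scipy-notebook is just one example; substitute whatever preconfigured analytics image your platform provides).

# A minimal sketch of deploying a preconfigured analytics container with the
# Docker SDK for Python. The image name is only an example of a prebuilt image
# that bundles core analytical components.
import docker

client = docker.from_env()

# Pull the prebuilt image and expose its notebook server on the host.
client.images.pull("jupyter/scipy-notebook", tag="latest")
container = client.containers.run(
    "jupyter/scipy-notebook:latest",
    ports={"8888/tcp": 8888},   # map the container's notebook port to the host
    detach=True,
)
print(f"Analytics environment running in container {container.short_id}")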

Preconfigured containers, coupled with access to data sources, make the power of advanced analytics affordably accessible to data scientists and other analytics users. And they help your IT staff efficiently provision resources for a wide variety of business requirements.

Learn more: Read about SAS for containers

About Author

Jim Harris

Blogger-in-Chief at Obsessive-Compulsive Data Quality (OCDQ)

Jim Harris is a recognized data quality thought leader with 25 years of enterprise data management industry experience. Jim is an independent consultant, speaker, and freelance writer. Jim is the Blogger-in-Chief at Obsessive-Compulsive Data Quality, an independent blog offering a vendor-neutral perspective on data quality and its related disciplines, including data governance, master data management, and business intelligence.
