The trouble with many line-of-business applications is that they tend to want solitary ownership of the computer on which they run. On your personal PC, this is relatively harmless; you’re usually single-tasking, focusing on that application and working with the data and functionality it provides. Resource consumption, such as storage, is fairly easy to monitor and control. When you want to install or uninstall the app, the effects on the rest of your computer are fairly easy to predict and manage.
Things start to get trickier when we turn to networked applications accessed by multiple users at the same time. Not only can it be difficult to manage all the various configurations needed to serve different users’ individual settings, but dependencies on things like runtimes, libraries, components, and even file and folder permissions can balloon into an IT management nightmare.
Often, when a significant new application was adopted, a corporate IT department would opt for a new server on which to house it.
In this article, we’ll walk you through whether virtual machines solve server complexity, what’s wrong with using virtual machines or virtual servers, why containers can be a better option, how containers began, the difference between Windows and Linux containers, what Kubernetes is, how to get started with Docker, what the OCI is, what’s inside a Docker image, and the role Docker plays for RAD Studio developers on Windows.
Do virtual machines solve server complexity?
VMware, among others, stepped into this arena with the concept of a ‘virtual’ machine, or ‘VM’. This technology allowed IT departments to safely and securely run multiple business applications, each in its own virtual ‘guest’ server, all on a single physical ‘host’ server. The great advantage of VMs is that a single server can offer each application a virtual operating system environment which appears to be completely dedicated to its requirements. If a problem occurs, only the affected VM needs to be restarted or even restored, leaving the other virtual machines untouched and often blissfully unaware of the drama. The VMs don’t even need to run the same operating system, since each virtual environment is completely distinct from the others and from the host.
What is wrong with using a virtual machine or virtual server?
Virtual machines are not a perfect solution, however. Each VM requires a complete copy of its own operating system, even if it’s running the same operating system as the server on which it is hosted. In addition, every virtual machine consumes CPU, RAM, and other resources, and requires maintenance activities like patching, monitoring, and licensing.
How are containers a better option than virtual machines?
Large web-scale companies like Google turned to container technologies to address some of the limitations of the VM model.
A container is roughly analogous to a VM. The major difference is that containers do not require their own operating system; in fact, all containers on a single host share the host’s operating system. This frees up huge amounts of system resources such as CPU, RAM, and storage.
Containers are also fast to start and ultra-portable.
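A quick way to see this kernel sharing in action, assuming a Linux host with Docker installed and the public alpine image (any small Linux image would do):

$ uname -r                          # kernel release reported by the host
$ docker run --rm alpine uname -r   # the same kernel release, reported from inside a container

Both commands print the same kernel version, because a container is just an isolated process on the host rather than a separate machine.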
How did containers begin?
Modern containers started in the Linux world, with Google contributing many container-related technologies to the Linux kernel. Despite that, container technology remained complex for many organizations and individuals until Docker came along. Docker effectively democratized containers and made them far more accessible.
Over the past few years, Microsoft has worked closely with Docker, Inc. and the open-source community to bring containers to the Windows OS. The Windows kernel technologies required to implement containers are collectively referred to as Windows Containers. The user-space tooling for working with Windows Containers is Docker, which gives almost the same experience as Docker on Linux.
What is the difference between Windows and Linux containers?
A running container shares the kernel of the host machine it is running on. This means that a containerized Windows app will not run on a Linux-based Docker host, and vice versa: Linux containers require a Linux host, and Windows containers require a Windows host.
There is no such thing as a Mac container; you can’t run a container which is a mini version of macOS. However, you can run Linux containers on a Mac seamlessly, because Docker Desktop for Mac runs them inside a lightweight Linux VM behind the scenes.
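If you’re ever unsure which kind of containers your Docker host runs, you can ask the daemon directly; a minimal check (the --format template is standard Go templating supported by docker info):

$ docker info --format '{{.OSType}}'   # prints 'linux' or 'windows'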
What is Kubernetes?
Kubernetes is an open-source project that originated at Google. It has emerged as the de facto orchestrator of containerized applications. In simple words, Kubernetes is the most popular tool for deploying and managing containerized apps.
Before the Container Runtime Interface (CRI) existed, Docker was Kubernetes’ main underlying runtime. Kubernetes deprecated Docker as a container runtime in version 1.20.
But there is no need to worry: Docker itself continues as before, and images built with Docker still run, because they conform to the OCI image specification. If you’re using a managed Kubernetes service like GKE, EKS, or AKS, you will need to make sure your worker nodes are using a supported container runtime before Docker support is removed in a future version of Kubernetes.
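To check which runtime your worker nodes are currently using, assuming kubectl is configured against your cluster:

$ kubectl get nodes -o wide   # the CONTAINER-RUNTIME column shows e.g. docker://... or containerd://...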
How can I get started with Docker?
Docker is software that runs on Linux and Windows. It creates, manages, and can even orchestrate containers.
The word “Docker” comes from a British expression for a dock worker: somebody who loads and unloads cargo containers from ships.
Docker has three main layers:
- The runtime works at the lowest level and is responsible for starting and stopping containers.
- The low-level runtime is called runc. Its job is to interface with the underlying OS and start and stop containers.
- The higher-level runtime is called containerd. It does a lot: it manages the entire lifecycle of a container, including pulling images, creating network interfaces, and managing the low-level runc instances.
- The Docker daemon sits above containerd and performs higher-level tasks, such as exposing the Docker remote API and managing images, volumes, networks, and more.
- Docker Swarm is Docker’s native technology for managing clusters of nodes, though most developers use Kubernetes for this instead.
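You can see these layered components on your own machine; a quick check, assuming a reasonably recent Docker installation (the exact fields vary by version):

$ docker version   # the Server section lists the Engine, containerd, and runc components
$ docker info      # shows the default runtime (runc) and the containerd version in use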
What is OCI?
The OCI (Open Container Initiative) is a governance council responsible for standardizing the low-level, fundamental components of container infrastructure. Docker has grown substantially over time and is now used in a wide variety of use cases, which made such standardization increasingly important.
A company called CoreOS decided to offer what they considered a more open approach with less reliance on Docker, Inc. CoreOS designed an open standard called appc, which defined elements like the image format and container runtime, and also produced an implementation of the spec named rkt. This placed the container ecosystem in a difficult position, with two competing standards.
The OCI emerged to resolve this split, publishing two major specifications: the image spec and the runtime spec. These specifications have had a big impact on the architecture and design of the core Docker product. As of Docker 1.11, the Docker Engine architecture conforms to the OCI runtime spec.
When you install Docker, you get two major components:
- Docker client
- Docker daemon or engine
The engine implements the runtime, the API, and everything else required to run Docker. It is practical to think of a Docker image as an object that contains an OS filesystem and an application with its dependencies. Getting images onto your Docker host is called pulling. To see which images you have pulled, you can execute this command:
$ docker image ls
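If the list is empty, pull an image first; a minimal example using the public ubuntu image from Docker Hub (any image name and tag will work the same way):

$ docker image pull ubuntu:latest   # download the image from the registry
$ docker image ls                   # the ubuntu image now appears in the list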
What is inside of a Docker image?
You can learn about this inside and out, but here we are going to describe it briefly. An image contains just enough of an operating system, along with all the code and the dependencies required, to run whatever application it is designed for. For instance, if you pull an Ubuntu image, it has an Ubuntu Linux filesystem, including its core utilities.
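One way to peek inside an image is to list the layers it was built from; a quick look, assuming you have already pulled ubuntu:latest:

$ docker image history ubuntu:latest   # each row is a layer and the instruction that created it
$ docker image inspect ubuntu:latest   # full JSON metadata, including layer digests and config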
Here are some useful hands-on Docker tutorials.
After pulling an image locally, you can start a container from it with this command:
$ docker container run -it ubuntu:latest /bin/bash
This run command tells the Docker daemon to start a new container based on the image we have locally. The -it flags instruct Docker to make the container interactive and to attach your current shell to the container’s terminal.
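Once inside, you are in a Bash shell running in the container; a short round trip, assuming the ubuntu:latest image from above (the prompt shown is illustrative):

root@<container-id>:/# cat /etc/os-release   # confirms you are inside an Ubuntu filesystem
root@<container-id>:/# exit                  # leaving the shell stops the container
$ docker container ls -a                     # lists all containers, including the one that just exited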
What is the Role of Docker for RAD Studio Developers?
We all know that RAD Server with RAD Studio gives you enormous power for your business: you can rapidly build and deploy services-based applications.
Pre-built Docker images for RAD Server on Linux are provided on Docker Hub. Running RAD Server in Docker helps you to:
- Deploy a RAD Studio Linux console from the RAD Studio IDE and view PAServer’s output.
- Deploy and install a custom RAD Server resource module from the RAD Studio IDE via PAServer.
- Cluster multiple RAD Server Docker container instances side by side.
- Build a child Docker image, based on one of the existing RAD Server Docker images as a parent, that contains custom RAD Server resource modules.
You can check out the Docker Hub here: https://hub.docker.com/u/radstudio
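As a sketch of how you might pull and run one of these images (the image name and tag here are assumptions based on the radstudio Docker Hub account, and 8080 is RAD Server’s usual default port; check the Hub page above for the actual names, tags, and ports):

$ docker image pull radstudio/radserver:latest                  # hypothetical tag; see Docker Hub for real tags
$ docker container run -d -p 8080:8080 radstudio/radserver:latest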
You can also follow the official documentation and several tutorials on how to utilize them.
You can learn more about RAD Server in the following post by David “I” Intersimone.