Docker Introduction

The Docker website describes Docker this way: “Docker is the world’s leading software container platform. Developers use Docker to eliminate ‘works on my machine’ problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux, Windows Server, and Linux-on-mainframe apps.”

You can check the version of Docker you have installed with the following command from a terminal prompt:

docker --version

Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer’s laptop to a test environment, from a staging environment into production, or from a physical machine in a data center to a virtual machine in a private or public cloud. The network topology, security policies, and storage might all be different between those environments, but the software still has to run.

Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
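The bundle described above is defined in a Dockerfile. A minimal sketch, assuming a hypothetical Python application consisting of an app.py script and a requirements.txt dependency list (both names are illustrative, not from the original text):

```dockerfile
# Start from a minimal base image that supplies the OS userland.
FROM python:3.12-slim

WORKDIR /app

# Bundle the application's library dependencies into the image...
COPY requirements.txt .
RUN pip install -r requirements.txt

# ...along with the application code itself.
COPY app.py .

# The container runs the same command no matter which host it lands on.
CMD ["python", "app.py"]
```

Building this file with `docker build` produces a single image that carries the app, its libraries, and its configuration together, which is exactly the portability the paragraph above describes.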

Containers vs. Virtualization

With virtualization technology, the package that can be passed around is a virtual machine, which includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it. By contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read only, while each container has its own mount (i.e., a way to access the container) for writing. That means containers are much more lightweight and use far fewer resources than virtual machines.

Docker has become synonymous with container technology because it has been the most successful at popularizing it. But container technology is not new; it has been built into Linux in the form of LXC for over 10 years.

Docker runs on Linux (across its various distributions) and on Windows. In 2016 Microsoft introduced the ability to run Windows containers in Windows Server 2016 and Windows 10. These are Docker containers designed for Windows, and they can be managed from any Docker client or from Microsoft’s PowerShell. Microsoft also introduced Hyper-V containers, which are Windows containers running in a Hyper-V virtual machine for added isolation.

Channels are the next topic you might look at. Channels let you isolate data: a peer cannot do anything unless it is part of a channel, the existing members of a channel must agree before another peer is included, and each peer can be part of many channels. The Hyperledger Fabric website defines a channel this way: “A Hyperledger Fabric channel is a private ‘subnet’ of communication between two or more specific network members, for the purpose of conducting private and confidential transactions.”

What is chaincode? Chaincode is a smart contract: a program that reads and updates the ledger data. All of your business logic lives inside the chaincode. Currently, chaincode is written in the Go language, or “golang”.
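To make the read-and-update idea concrete, here is a small Go sketch. It deliberately does not use the real Fabric chaincode API; the Ledger map and the Transfer function are hypothetical stand-ins for the world state and for one piece of chaincode business logic:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// Ledger is a hypothetical in-memory stand-in for the ledger's world state.
// Real chaincode would read and write state through Fabric's APIs instead.
type Ledger map[string]string

// Transfer is the kind of business logic chaincode encapsulates:
// read two balances from the ledger, validate the request, and
// write the updated values back.
func Transfer(l Ledger, from, to string, amount int) error {
	fromBal, err := strconv.Atoi(l[from])
	if err != nil {
		return errors.New("missing or invalid balance for " + from)
	}
	toBal, err := strconv.Atoi(l[to])
	if err != nil {
		return errors.New("missing or invalid balance for " + to)
	}
	if amount <= 0 || fromBal < amount {
		return errors.New("invalid transfer amount")
	}
	l[from] = strconv.Itoa(fromBal - amount)
	l[to] = strconv.Itoa(toBal + amount)
	return nil
}

func main() {
	ledger := Ledger{"alice": "100", "bob": "50"}
	if err := Transfer(ledger, "alice", "bob", 30); err != nil {
		fmt.Println("transfer failed:", err)
		return
	}
	fmt.Println(ledger["alice"], ledger["bob"]) // prints "70 80"
}
```

The point of the sketch is the shape of the logic, not the API: a chaincode transaction reads current state, applies the business rule, and writes new state back to the ledger.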
