Docker's No Nonsense Tour
Before Docker was a thing, developing, testing, and shipping software was a huge mess. Don't get me wrong, it is still a mess afterward, but by the end of this article you might appreciate how Docker made the process more bearable. These are some of the technologies used to develop software without Docker:
- Virtual machines
- Package management systems
- Configuration management tools
- A pile of interconnected dependencies
Meaning each member of the different teams involved in the development process would have to get that huge pile of mess working locally. You can imagine how painful that would be, and that's where Docker comes into play: it gives you the ability to contain the entire software stack and publish it (publicly or privately) as an image for team members to use. By use, I mean just running a simple command such as
docker run <image-name>
to initiate a container out of that image.
What’s Docker and Its Architecture
Docker's name comes from the fact that, in the early days, a docker was a worker who moved goods on and off ships. Dockers were known to be so cost-efficient that they could fit items and boxes of different shapes onto the ship, and that's exactly what Docker does with software instead of boxes. Docker is an open-source project made of multiple components that work together to achieve the end goal of building, distributing, and running software. Those components are:
- A CLI client program, used to issue commands, either directly by an end user or from a script, to run a container, build an image, fetch an image, etc.
- A background process (the Docker daemon) with an HTTP RESTful API. The daemon communicates with the CLI client and also with other web services to send and receive images.
- Docker registries, which can be either private or public. The most used public registry is Docker Hub. A registry is used to store images and to fetch them locally.
So images are the building units that Docker moves around the web (instead of goods), either to run as a container or to be used as a layer in another Docker image. Containers are spun up from images and build on features of Unix-like operating systems; they serve as an isolated runtime environment, where the isolation can cover nearly every resource a process can consume, such as CPU time, memory, disk space, and ports.
Why Docker, When We Have Virtual Machines
After learning what Docker is, you might wonder: why should I even care about Docker when I can just use a virtual machine (VM)? Well, Docker containers can serve the same purpose as a VM while being more efficient. A VM provides virtual hardware to run an entire operating system on top of the host's, whereas containers are simply a runtime environment that uses the host's kernel to communicate with the underlying hardware. But it doesn't end there; Docker has some more benefits:
- Packaging software is one of the main features I started using Docker for. In my second year of my computer science degree, we had to install Oracle DB on our machines for a class, and some of my friends had MacBooks, so they had to choose between giving up disk space (around 45GB) for a virtual machine or simply using Docker to run it as a container. You can tell what I helped them achieve.
- Continuous delivery is a pipeline that enables software to be rebuilt and re-deployed whenever a new change is delivered to production, with little to no downtime.
- Modeling networks: we can spin up dozens or even hundreds of containers to mimic a respectable network of real, interactive machines. It's a great tool for testing real-world scenarios without breaking the bank.
- Documenting software dependencies is a must when working with Docker, because you must list all the dependencies in the Dockerfile beforehand. So even if Docker isn't used in every phase, a comprehensive list of dependencies is still there to be used.
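To illustrate that last point, here is what such a dependency list looks like in practice. This is a hypothetical Dockerfile for an imaginary Python service (the package names and `app.py` are made up for the example), not a real project's file:

```dockerfile
# Base image pins the language runtime and OS userland
FROM python:3.12-slim

WORKDIR /app

# System-level dependencies, spelled out explicitly
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Application-level dependencies, all listed in one place
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application itself
COPY . .
CMD ["python", "app.py"]
```

Even a teammate who never runs Docker can read this file top to bottom and know exactly what the software needs.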