Announcing the newest Docker Meetup Group in Kisumu, Kenya!

I joined LakeHub about 11 months ago at a particularly interesting time. At Chamaconekt Kenya, I was just beginning to run up against the limits of our monolithic, centralized application and needed to start the transition to a portfolio of microservices. It has been an interesting experience: we built and deployed those microservices, and we still run them today.

One of the most useful things I learned along the way was that many of the things we were building had a very simple concept at their heart: Docker containers. Containers as an operating-system concept have a long history, and they sit at the heart of many distributed systems, microservice and real-time application architectures.

A few months of using Docker has inspired me: it cuts costs and makes the software delivery process more efficient. Because of the inspiration I draw from developers across the Lake Victoria region, I believe containers can help us keep building the Kenyan economy in significant strides. So I decided to create a Docker Kisumu Meetup Group, in collaboration with Docker Inc. and side by side with LakeHub, to help drive this agenda within the region.

It is with great pleasure that we announce our new Docker Meetup Group in Kisumu, Kenya. It’s the sixth in Africa, after groups in Cairo, Cape Town, Casablanca, Johannesburg and Nairobi.

We are about Open Source · Cloud Computing · Virtualization · PaaS (Platform as a Service) · SaaS (Software as a Service) · DevOps · IaaS (Infrastructure as a Service) · OpenStack · Continuous Delivery · Docker

This group will help you meet other developers and ops engineers using Docker. Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.
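
If you want a taste before the first meetup, the canonical first commands are tiny. A minimal sketch, assuming Docker is already installed (hello-world is a small test image published on Docker Hub):

    # Download and run a minimal test image from Docker Hub
    docker run hello-world

    # Start an interactive shell inside an Ubuntu container
    docker run -it ubuntu:14.04 /bin/bash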

Many developers have started using Docker in the past two years, typically because they had an app they wanted to put into production and run at scale. Our meetups will help you get started with containers and show you how to orchestrate them. We will host speakers who contribute to the Docker open source projects, Docker Inc. employees, and local developers and engineers who are using tools in the Docker ecosystem.

We expect to start our series of meetups in January 2016, and our main strategy will be to visit local universities and organize talks and hackathons as a community on their respective campuses. Monthly meetups will also be held at LakeHub so developers in the community can stay up to date on current topics. We want to be a catalyst for students, developers, engineers and system administrators to build great products and contribute to the open source projects that are building greater technology.

The goal of this post is to give you an idea of what is happening in the Docker ecosystem, with both high- and low-level overviews.

The rest of this post introduces the tools in the Docker ecosystem that we will cover in greater depth at our meetups.

  1. Docker: the container engine

Docker is an open source project to pack, ship and run any application as a lightweight container.

Docker containers are both hardware-agnostic and platform-agnostic. This means they can run anywhere, from your laptop to the largest cloud compute instance and everything in between – and they don’t require you to use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases, and backend services without depending on a particular stack or provider.

Docker began as an open-source implementation of the deployment engine which powers dotCloud, a popular Platform-as-a-Service. It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands of applications and databases.

Better than VMs

A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats are VMware’s vmdk, Oracle VirtualBox’s vdi, and Amazon EC2’s ami. In theory these formats should allow every developer to automatically package their application into a “machine” for easy distribution and deployment. In practice, that almost never happens, for a few reasons:

  • Size: VMs are very large, which makes them impractical to store and transfer.
  • Performance: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, or large-scale deployment of CPU- and memory-intensive applications on large numbers of machines.
  • Portability: competing VM environments don’t play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
  • Hardware-centric: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most: building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.

By contrast, Docker relies on a different sandboxing method known as containerization. Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization: Linux with OpenVZ, VServer and more recently LXC; Solaris with Zones; and FreeBSD with Jails.

Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all four problems. Docker containers are small (and their transfer can be optimized with layers), they have essentially zero memory and CPU overhead, they are completely portable, and they are application-centric by design.
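
You can inspect image sizes and layers for yourself; these are standard Docker CLI commands:

    # List local images and their sizes
    docker images

    # Inspect the stack of layers that makes up an image
    docker history ubuntu:14.04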

Perhaps best of all, because Docker operates at the OS level, it can still be run inside a VM!

Plays well with others

Docker does not require you to buy into a particular programming language, framework, packaging system, or configuration language.

Is your application a Unix process? Does it use files, TCP connections, environment variables, standard Unix streams and command-line arguments as inputs and outputs? Then Docker can run it.

Can your application’s build be expressed as a sequence of such commands? Then Docker can build it.
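
To make that concrete, here are two illustrative one-liners; the image, environment variable and commands are arbitrary examples:

    # A container is just a Unix process: arguments, environment
    # variables and exit codes behave as usual
    docker run -e GREETING=hello ubuntu:14.04 \
        /bin/sh -c 'echo "$GREETING world"'

    # Standard Unix streams pass straight through
    echo "one line of input" | docker run -i ubuntu:14.04 wc -l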

Escape dependency hell

A common problem for developers is the difficulty of managing all their application’s dependencies in a simple and automated way.

This is usually difficult for several reasons:

  • Cross-platform dependencies. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules, internal components developed for another project, etc. These dependencies live in different “worlds” and require different tools – these tools typically don’t work well with each other, requiring awkward custom integrations.
  • Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease – but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
  • Custom dependencies. A developer may need to prepare a custom version of their application’s dependency. Some packaging systems can handle custom versions of a dependency, others can’t – and all of them handle it differently.

Docker solves the problem of dependency hell by giving the developer a simple way to express all their application’s dependencies in one place, while streamlining the process of assembling them. If this makes you think of XKCD 927, don’t worry. Docker doesn’t replace your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.

Docker defines a build as running a sequence of Unix commands, one after the other, in the same container: a build command modifies the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, and so on. Since each build command inherits the result of the previous commands, the order in which the commands are executed expresses dependencies.
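
As a hypothetical sketch of such a build: every instruction below runs on top of the layer produced by the one before it, so the order of the lines is the dependency order (the application and its app.py are made up for illustration):

    # Write a minimal Dockerfile
    cat > Dockerfile <<'EOF'
    # Base layer: the Ubuntu userland
    FROM ubuntu:14.04
    # Next layer: refresh the package index
    RUN apt-get update
    # Next layer: install a system dependency
    RUN apt-get install -y python
    # Next layer: add our application code
    COPY . /app
    # Default command when a container starts (not a filesystem change)
    CMD ["python", "/app/app.py"]
    EOF

    # Replay the steps in order, caching each resulting layer
    docker build -t myapp .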

Under the hood

Under the hood, Docker is built on the following components:

  • The cgroups and namespaces capabilities of the Linux kernel
  • The Go programming language
  • The Docker Image Specification
  • The Libcontainer Specification

  2. Swarm: a Docker-native clustering system

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host.

Swarm serves the standard Docker API, so any tool which already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts: Dokku, Compose, Krane, Flynn, Deis, DockerUI, Shipyard, Drone, Jenkins… and, of course, the Docker client itself.
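
As a rough sketch of the classic token-based workflow (the IP addresses are placeholders, and each node’s Docker daemon must be listening on TCP port 2375):

    # Create a cluster ID using the hosted discovery service
    TOKEN=$(docker run --rm swarm create)

    # On every node: join the cluster
    docker run -d swarm join --addr=<node_ip>:2375 token://$TOKEN

    # On one machine: start the Swarm manager
    docker run -d -p 4000:2375 swarm manage token://$TOKEN

    # Any Docker client can now treat the whole cluster as one daemon
    docker -H tcp://<manager_ip>:4000 info
    docker -H tcp://<manager_ip>:4000 run -d redis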

Like other Docker projects, Swarm follows the “batteries included but removable” principle. It ships with a set of simple scheduling backends out of the box, and as initial development settles, an API will be developed to enable pluggable backends. The goal is to provide a smooth out-of-the-box experience for simple use cases, and allow swapping in more powerful backends, like Mesos, for large scale production deployments.

  3. Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration. Compose is great for development, testing, and staging environments, as well as CI workflows.

Using Compose is basically a three-step process.

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Lastly, run docker-compose up and Compose will start and run your entire app.
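
Here is a minimal sketch of steps 2 and 3, assuming the Dockerfile from step 1 already exists; the web-plus-Redis pairing is purely illustrative:

    # Step 2: describe the services that make up the app
    cat > docker-compose.yml <<'EOF'
    web:
      build: .
      ports:
        - "5000:5000"
      links:
        - redis
    redis:
      image: redis
    EOF

    # Step 3: build, create and start everything with one command
    docker-compose up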

Compose has commands for managing the whole lifecycle of your application:

  • Start, stop and rebuild services
  • View the status of running services
  • Stream the log output of running services
  • Run a one-off command on a service
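
Each of those bullets maps to a subcommand; continuing the illustrative web/redis example:

    docker-compose up -d        # start all services in the background
    docker-compose ps           # view the status of running services
    docker-compose logs         # stream the log output of running services
    docker-compose run web env  # run a one-off command on the "web" service
    docker-compose stop         # stop services without removing them
    docker-compose build        # rebuild service images after code changes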

Common Use Cases for Compose

Compose can be used in many different ways. Some common use cases are outlined below.

a) Development environments

When you’re developing software, the ability to run an application in an isolated environment and interact with it is crucial. The Compose command line tool can be used to create the environment and interact with it.

The Compose file provides a way to document and configure all of the application’s service dependencies (databases, queues, caches, web service APIs, etc). Using the Compose command line tool you can create and start one or more containers for each dependency with a single command (docker-compose up).

Together, these features provide a convenient way for developers to get started on a project. Compose can reduce a multi-page “developer getting started guide” to a single machine-readable Compose file and a few commands.

b) Automated testing environments

An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a Compose file you can create and destroy these environments in just a few commands.
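
A sketch of what that can look like in a CI script; run_tests.sh stands in for whatever your real test command is:

    # Create the full environment in the background
    docker-compose up -d

    # Run the (hypothetical) test suite as a one-off container,
    # removed automatically when it exits
    docker-compose run --rm web ./run_tests.sh

    # Destroy the environment again
    docker-compose stop
    docker-compose rm -f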

c) Single host deployments

Compose has traditionally been focused on development and testing workflows, but with each release we’re making progress on more production-oriented features. You can use Compose to deploy to a remote Docker Engine. The Docker Engine may be a single instance provisioned with Docker Machine or an entire Docker Swarm cluster.
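
A sketch of that workflow with Docker Machine; the machine name “staging” and the VirtualBox driver are just examples, and any Machine driver (cloud or local) works the same way:

    # Provision a host running Docker Engine
    docker-machine create --driver virtualbox staging

    # Point the local Docker client (and Compose) at that engine
    eval "$(docker-machine env staging)"

    # The same Compose file now deploys to that host
    docker-compose up -d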

The End

If you made it this far and have an interest in our meetups, join us at http://www.meetup.com/Docker-Kisumu/. You can also follow us on Twitter and Facebook.

Facebook: https://www.facebook.com/DockerKisumu/
Twitter: https://twitter.com/DockerKsm