Archive for the ‘docker’ Category

Docker Cloud: you can deal with different cloud providers through a single API! (part 2)

March 17, 2016

Hi again!

The first part of this series walked through the steps to create a Docker container hosting a simple web application using the Docker Cloud API.

We assume that the hosting nodes are already created; the objective is to build a 3-tier application:

  1. HAProxy as a load balancer and reverse proxy, hosted in a single container
  2. Four web applications hosted in four containers
  3. A Redis server hosted in a single container to provide caching

Now let’s start:

We can describe our infrastructure using a stackfile. The syntax is easy to understand. Let's see how to define our application.
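The stackfile for this application might look like the following sketch (the service names, the dockercloud/haproxy and stock redis images, and the target_num_containers count are my reconstruction of the tiers described above, not a copy of the original screenshot):

```yml
lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
web:
  image: dockercloud/hello-world
  links:
    - redis
  target_num_containers: 4
redis:
  image: redis
```

Linking lb to web and web to redis is what wires the three tiers together.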


As you can see, the labels lb, web and redis define the tiers. The HAProxy load balancer listens on port 80 and is linked directly to the web tier. The four containers that host our web application are linked to the Redis cache tier. It's that simple, isn't it?

The services are then created automatically.


Using the Stack dashboard, we can start the stack.


Notice the order in which the containers start: Redis, then the web containers, and finally the HAProxy load balancer.


This example is deliberately simple: the load balancer's default behaviour is round-robin, without any session affinity.

Each refresh (F5) takes us to a different container.
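To make the round-robin behaviour concrete, here is a toy sketch in plain shell (no HAProxy involved; the backend names web-1 through web-4 are hypothetical) of how request N maps to a backend:

```shell
#!/bin/sh
# Toy model of HAProxy's default round-robin scheduling:
# request N goes to backend (N mod number_of_backends).
pick_backend() {
  case $(( $1 % 4 )) in
    0) echo web-1 ;;
    1) echo web-2 ;;
    2) echo web-3 ;;
    3) echo web-4 ;;
  esac
}

for n in 0 1 2 3 4 5; do
  echo "request $n -> $(pick_backend "$n")"
done
```

With no session affinity, request 4 lands back on web-1, which is exactly what the repeated refreshes show.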



Categories: docker

Docker Cloud: you can deal with different cloud providers through a single API! (part 1)

March 2, 2016

Hi again!

Docker is good, attractive, even splendid. Right after Docker Datacenter, Docker Cloud is the new solution for creating cluster nodes on different platforms such as Azure, along with containers and stacks (in a Swarm fashion).

OK, today I will show you the steps for two scenarios on Azure:

  1. A simple web application hosted in a single container (part 1).
  2. A 3-tier application (part 2) based on:
    1. HAProxy as a load balancer and reverse proxy, hosted in a single container
    2. Four web applications hosted in four containers
    3. A Redis server hosted in a single container to provide caching

Let us start with the first scenario:

  1. First of all, import a certificate into the settings of your Azure account so that the Docker Cloud API can authenticate.
  2. Create a cluster with a given number of nodes; you can resize it later. According to Docker Cloud, the first node is free! You just have to select the template for your cloud service.


  3. After that, you will notice that a cloud service and the corresponding virtual machine have been created.


  4. Now create the container that will host the web server and the application.



  5. Select the appropriate image; in my case I chose the dockercloud/hello-world image. Notice that you can select a deployment strategy when using a cluster of nodes, and you can instantiate a given number of containers from the same image. You can also define a stackfile that describes the composed architecture.


  6. In the Ports section you have to publish the default port; in my case I published the default HTTP port. Don't forget to define an endpoint listening on this port for the virtual machine acting as the container host.




  7. Now that the service is created, we just have to start it.


  8. And a beautiful page is displayed in your browser.


Categories: docker, Uncategorized

Linux containers vs Windows containers: another eternal war starts!

February 29, 2016

Hi again!

Yes, containers are the future. We can run virtual machines inside containers, containers inside virtual machines, virtual appliances inside containers, containers inside virtual machines inside containers, and so on. You can't limit imagination.

Containers are not a new technology. As a Microsoft specialist I had not come across them before, because they come from the Unix/Linux world through mechanisms like namespaces, cgroups and capabilities. Microsoft has decided to integrate this beautiful technology into its new OS, Windows Server 2016.
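As a quick illustration of the Linux side, on any recent Linux host you can list the namespaces a process belongs to; this is a plain shell sketch, independent of Docker:

```shell
#!/bin/sh
# Every Linux process runs inside a set of namespaces;
# container runtimes simply give a process its own set.
ls /proc/self/ns
```

On a typical kernel this lists entries such as mnt, net, pid and uts, each of which a container can be given separately.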

You can already build containers using CTP4 and the Docker engine.

Containers start very fast, but the question is: are Windows containers faster or slower than Linux containers?

I set up two virtual machines on my laptop with these configurations:

  1. Windows Server Core 2016 CTP4, 4 GB RAM, 2 virtual CPUs. Docker version 1.10.0-dev, build e39c811, experimental
  2. Linux Ubuntu 15.04, 4 GB RAM, 2 virtual CPUs. Docker version 1.10.0-dev, build 59a341e

On each virtual machine, a container was created from the corresponding base OS image. Let us look at the start and stop times. I noticed that the Linux containers start faster after the first bootstrap.

                 Windows    Linux
Starting time    11-12 s    1st: 2-4 s, 2nd: 1-2 s
Stopping time    12-13 s    11-12 s

What about a more powerful platform like Azure? I created two virtual machines there with the same configurations:

                 Windows    Linux
Starting time    3-4 s      1st: 2-3 s, 2nd: 0-0.3 s!
Stopping time    2-4 s      10-10.5 s
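Timings like the ones above can be collected with a small helper; this is a sketch assuming a GNU date that supports %N (nanoseconds), to be wrapped around commands such as docker start or docker stop:

```shell
#!/bin/sh
# Time an arbitrary command in milliseconds, e.g.:
#   time_ms docker start mycontainer
#   time_ms docker stop mycontainer
time_ms() {
  t0=$(date +%s%N)                 # nanoseconds since epoch (GNU date)
  "$@" >/dev/null 2>&1
  t1=$(date +%s%N)
  echo $(( (t1 - t0) / 1000000 ))  # elapsed milliseconds
}

time_ms sleep 1
```

Running it against sleep 1 prints a value close to 1000, which is a quick sanity check before timing the Docker commands.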

As a quick conclusion, Linux containers start roughly ten times faster than Windows containers, comparing the warm start times.

Microsoft still has a lot of work to do on container performance. Let us wait for the release version.

Categories: docker

--net=host option not recognized under Windows 2016 CTP4

February 26, 2016

Hi again,

After Docker on Linux, I have been playing with the new Windows containers using the Docker engine.

Before starting, let me quickly describe my environment: I deployed a virtual machine based on Windows Server Core CTP4 and enabled the containers feature on it. By default, the script used to provision a container host, "Install-ContainerHost.ps1", creates a virtual switch named "Virtual Switch" using the NAT connection type; finally this feature is available! Previously, we were obliged to use routing services in another virtual machine, for example.


On Linux I was able to create containers that use the same network stack as the host, so I just had to create port-forwarding rules from the host to my container. To achieve this we use the --net=host option.

I created a container from the WindowsServerCore image with a command of this form:

docker run -it --net=host windowsservercore cmd

However, this option is not accepted. So how do I configure my Docker network?

Under Windows, a file named "runDockerDaemon.cmd", located in the C:\ProgramData\docker folder, defines the default vSwitch to use when creating containers.


As expected, the "Virtual Switch" switch is used by default. If you want to use an external vSwitch, you just have to define it in the runDockerDaemon file:

@echo off
set certs=%ProgramData%\docker\certs.d

if exist %ProgramData%\docker (goto :run)
mkdir %ProgramData%\docker

:run
if exist %certs%\server-cert.pem (if exist %ProgramData%\docker\tag.txt (goto :secure))

docker daemon -D -b "Virtual Switch"
goto :eof

:secure
docker daemon -D -b "Virtual Switch" -H 0.0.0.0:2376 --tlsverify --tlscacert=%certs%\ca.pem --tlscert=%certs%\server-cert.pem --tlskey=%certs%\server-key.pem

According to Microsoft, "Each container needs to be attached to a virtual switch in order to communicate over a network. A virtual switch is created with the New-VMSwitch command. Containers support a virtual switch with type External or NAT."

I think that with vSwitches each container manages its own network stack, which introduces overhead. Under Linux we can bypass this layer and use the host's network stack directly.


Categories: docker

Via docker, I noticed that Mono 4.2.1 is faster than Mono 3.2.8

February 14, 2016


Hi again,

I am developing a prototype of microservices running in Docker containers. These microservices perform file operations using WCF. That will perhaps be the topic of future articles, insha'Allah.

On my Ubuntu 15.04 machine I had installed Mono 3.2.8, shipped with MonoDevelop. I installed Docker and pulled the latest mono image from the hub.

To study the overhead of my microservice, the same web service was also run directly on the host. A Windows application was used to consume the web services.

In my tests, when writing files sequentially, I noticed that the web service in Docker responded faster than the web service on the host.

Since the Docker container shares the same kernel as the host, I checked the version of Mono installed in the image from which I created my container, using this command:

docker exec -it "myDockerName or ID" mono --version

Bingo! Mono 4.2.1 was installed in my container. To verify my hypothesis, I updated Mono on the host. The results were then similar!

Thank you docker! And thank you Mono!

Categories: docker, Uncategorized