A more advanced Survey List in SharePoint 2013

July 5, 2016
Hi everyone!

My customer asked for a solution that lets end users evaluate the applications we developed. A solution based on the Survey template was proposed, but the default template is just a form with the questions as fields. The interface is not user-friendly when the survey is long!

Let us assume that we have a survey with three questions. The following picture shows the default NewForm:



What is the difference between a custom list and a survey then? Some interesting features can be cited:

  • The statistics view to evaluate the survey responses.
  • The branching feature, which allows jumping to a given question based on the answer chosen at the current step.

Now, let us develop a more advanced survey using just JavaScript! We want to achieve these objectives:

  • View just one question at a given step
  • Two buttons to move to the next or the previous question
  • Show the Finish button once the user has reached the last question
  • Confirm the answers before saving

Three libraries will be used, for different purposes:

  • SP.js for notifications.
  • SPUtility.js for hiding and showing the questions.
  • jQuery, as a dependency of SPUtility.js.

We will simply write a custom HTML file and reference it from a Content Editor Web Part added to the NewForm.aspx page of the survey.

Here is the content of Quiz.html:

<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="utf-8" />
    <!-- JavaScript file references -->
    <script type="text/javascript" src="/JS/jquery-1.11.2.min.js"></script>
    <script type="text/javascript" src="/JS/sputility.min.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.js"></script>
</head>
<body>
    <!-- A table containing the Previous and Next buttons -->
    <table class="ms-rteTable-default" cellspacing="0" style="width: 100px;">
        <tr>
            <td style="width: 100px; height: 20px;"><div id="Previous">&lt;&lt;&lt;</div></td>
            <td style="width: 100px; height: 20px;"><div id="Next">&gt;&gt;&gt;</div></td>
        </tr>
    </table>
    <script type="text/javascript">
        // A variable to keep the current position in the survey
        var i = 0;
        // An array to store the names of the question fields
        var fieldsArray = [];

        // Invoked by SharePoint when the user clicks the Finish button
        function PreSaveItem() {
            var fields = SPUtility.GetSPFields();
            var message = 'Are you sure you want to save these answers?\n';
            for (var fieldName in fields) {
                message += fieldName + ' : ' +
                    SPUtility.GetSPField(fieldName).GetValue() + '\n';
            }
            if (confirm(message)) {
                return PreSaveAction(); // allow the form to be saved
            }
            return false; // cancel the save
        }

        // Notify the user that the answers are being saved
        function PreSaveAction() {
            SP.UI.Notify.addNotification('Your answers are being saved...', false);
            return true;
        }

        $(document).ready(function () {
            // Hide the form toolbar (and its Finish button) until the last question
            $('.ms-formtoolbar').hide();
            // Get the list of questions, hide them all, then show the first one
            var fields = SPUtility.GetSPFields();
            for (var fieldName in fields) {
                fieldsArray.push(fieldName);
                SPUtility.GetSPField(fieldName).Hide();
            }
            SPUtility.GetSPField(fieldsArray[0]).Show();

            // Next handler
            $('#Next').click(function () {
                if (i < fieldsArray.length - 1) {
                    // Hide the current question and show the next one
                    SPUtility.GetSPField(fieldsArray[i]).Hide();
                    i++;
                    SPUtility.GetSPField(fieldsArray[i]).Show();
                    // Show the form toolbar when we reach the last question
                    if (i === fieldsArray.length - 1) {
                        $('.ms-formtoolbar').show();
                    }
                }
            });

            // Previous handler
            $('#Previous').click(function () {
                if (i > 0) {
                    // Hide the current question and show the previous one
                    SPUtility.GetSPField(fieldsArray[i]).Hide();
                    i--;
                    SPUtility.GetSPField(fieldsArray[i]).Show();
                }
            });
        });
    </script>
</body>
</html>

Let us see the results!

Oh yes, the buttons Next and Previous allow you to navigate between the questions:

First Question:


Second Question:


Third Question:


Now, if we click the Next button on the last question, the Finish button is shown:


Before the answers are saved, a confirmation dialog summarizes them:


Finally, a notification displays a message indicating that the answers are being saved:


Ok that’s all! Hope it helps!



Docker Cloud; you can deal with different Cloud Providers through a single API! (part2)

March 17, 2016

Hi again!

The first part of this series depicted the steps to follow in order to create a Docker container hosting a simple Web Application using the Docker Cloud API.

We assume that the hosting nodes are created; the objective is to develop a three-tier application:

  1. HAProxy as a load balancer and reverse proxy, hosted on a single container
  2. Four web applications hosted on four containers
  3. A Redis server hosted on a single container to provide caching

Now let’s start:

We can describe our infrastructure using a StackFile. The syntax is easy to understand; let's see how to define our application.


As you can see, the labels lb, web and redis define the tiers. The HAProxy load balancer listens on port 80 and is directly linked to the web tier. The four containers that host our web application are linked to the Redis cache tier. It's so simple, isn't it?
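Based on that description, the stackfile looks roughly like the following (a sketch: the image names, the roles key and target_num_containers are assumptions based on the standard Docker Cloud stackfile format):

```yaml
lb:
  image: dockercloud/haproxy
  links:
    - web
  ports:
    - "80:80"
  roles:
    - global
web:
  image: dockercloud/hello-world
  target_num_containers: 4
  links:
    - redis
redis:
  image: redis
```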

The services are then created automatically.


Using the Stack dashboard, we can start the stack.


Notice the order in which the containers start: Redis, then the web tier, and finally the HAProxy load balancer.


This example is very simple: the default behaviour of the load balancer is Round Robin, without any session affinity.

With each refresh (F5), we visit a different container.



Categories: docker

Docker Cloud; you can deal with different Cloud Providers through a single API! (part1)

March 2, 2016

Hi again!

Docker is good, attractive… splendid. Right after Docker Datacenter, Docker Cloud is the new solution for creating cluster nodes, containers and Stacks (in Swarm fashion) on different platforms such as Azure.

Ok today I will show you some steps for two scenarios on Azure:

  1. A simple web application hosted in a single container (part 1).
  2. A three-tier application (part 2) based on:
    1. HAProxy as a load balancer and reverse proxy, hosted on a single container
    2. Four web applications hosted on four containers
    3. A Redis server hosted on a single container to provide caching

Let us start with the first scenario:

  1. First of all, import a certificate into the settings of your Azure account to authenticate the Docker Cloud API.
  2. Create a cluster with a given number of nodes; you can resize the cluster later. According to Docker Cloud, the first node is free! You just have to select the template of your cloud service.


  3. After that, you will notice that a cloud service and the corresponding virtual machine are created.


  4. Now, create the container that will host the web server and the application.



  5. Select the appropriate image; in my case I chose the dockercloud/hello-world image. Notice that you can select a deployment strategy when using a cluster of nodes, instantiate a given number of containers from the same image, or define a stack file that describes the composed architecture.


  6. In the Ports section you have to publish the default port; in my case I publish the default HTTP port. Don't forget to define an endpoint listening on this port for the virtual machine hosting the containers.




  7. Now that the service is created, we just have to start it.


  8. And a beautiful page is displayed in your browser.


Categories: docker, Uncategorized

Linux containers vs Windows containers; another eternal war starts!

February 29, 2016

Hi again!

Yes the containers are the future. We can develop virtual machines into containers, containers into virtual machines, virtual appliances into containers, containers into virtual machines into containers, etc. You can’t limit imagination.

Containers are not a new technology. As a Microsoft specialist I had not come across them before, because they come from the Unix/Linux world, built on mechanisms like namespaces, cgroups and capabilities. Microsoft has decided to integrate this beautiful technology into its new OS, Windows Server 2016.

You can develop some containers using the CTP4 and Docker engine.

Containers have the ability to start very fast, but the question is: are Windows containers faster or slower than Linux containers?

I set up two virtual machines on my laptop with these configurations:

  1. Windows Server Core 2016 CTP 4. 4GB RAM, 2 virtual CPU. Docker version 1.10.0-dev, build e39c811, experimental
  2. Linux Ubuntu 15.04, 4GB RAM, 2 virtual CPU. Docker version 1.10.0-dev, build 59a341e

A container is created based on the underlying OS on each virtual machine. Let us see the starting and stopping times. I noticed that the Linux containers start faster after the first bootstrapping.
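The timings were measured roughly as follows (a sketch; mycontainer stands for the container created on each virtual machine, and a running Docker engine is assumed):

```shell
# Rough measurement of container start and stop times
time docker start mycontainer
time docker stop mycontainer
```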

                  Windows      Linux
Starting time     11-12s       2-4s (1st start), 1-2s (2nd)
Stopping time     12-13s       11-12s

What about a more powerful platform like Azure? I have two virtual machines with the same configurations:

                  Windows      Linux
Starting time     3-4s         2-3s (1st start), 0-0.3s (2nd)!
Stopping time     2-4s         10-10.5s

As a quick conclusion, we can see that Linux containers can start roughly ten times faster than Windows containers.

Microsoft still has a lot of work to do on container performance. Let us wait for the release version.

Categories: docker

--net=host option not recognized under Windows Server 2016 CTP4

February 26, 2016

Hi again,

After Docker on Linux, I have been playing with the new Windows containers using the Docker engine.

Before starting, let me quickly describe my environment: I deployed a virtual machine based on Windows Server Core CTP4 and enabled the containers feature on it. By default, the "Install-ContainerHost.ps1" script used to provision a container host creates a virtual switch named "Virtual Switch" using the NAT connection type; finally, this feature is available! Before, we were obliged to use routing services in another virtual machine, for example.


On Linux I was able to create containers that use the same network stack as the host, so I just had to create port-forwarding rules from the host to the container. To achieve this, we use the --net=host option.

I created a container from the WindowsServerCore image using this command:
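A sketch of the two situations, assuming the nginx and windowsservercore image names:

```shell
# On Linux: the container shares the host's network stack directly
docker run -d --net=host nginx

# On Windows Server Core CTP4: the same option is not recognized
docker run -it --net=host windowsservercore cmd
```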


However this option is not accepted. How to configure my docker network?

Under Windows, a file named "runDockerDaemon.cmd", located in the C:\ProgramData\docker folder, defines the default virtual switch to use when creating containers.


As expected, the "Virtual Switch" switch is defined by default. If you want to use an external virtual switch, you just have to define it in the runDockerDaemon file.

@echo off
set certs=%ProgramData%\docker\certs.d

if exist %ProgramData%\docker (goto :run)
mkdir %ProgramData%\docker

:run
if exist %certs%\server-cert.pem (if exist %ProgramData%\docker\tag.txt (goto :secure))

docker daemon -D -b "Virtual Switch"
goto :eof

:secure
docker daemon -D -b "Virtual Switch" -H 0.0.0.0:2376 --tlsverify --tlscacert=%certs%\ca.pem --tlscert=%certs%\server-cert.pem --tlskey=%certs%\server-key.pem

According to Microsoft, "Each container needs to be attached to a virtual switch in order to communicate over a network. A virtual switch is created with the New-VMSwitch command. Containers support a virtual switch with type External or NAT."
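Following that documentation, creating an external virtual switch could look like the following (a sketch; "Ethernet" is an assumed adapter name, check Get-NetAdapter for yours):

```powershell
# Create an external virtual switch bound to a physical network adapter;
# binding to a NIC with -NetAdapterName makes the switch type External
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"
```

The switch name can then be passed to the Docker daemon with the -b option in runDockerDaemon.cmd, as shown above.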

I think that with virtual switches, containers manage their own network stacks, which introduces some overhead. Under Linux, we can bypass this layer and use the host network stack directly.


Categories: docker

Via docker, I noticed that Mono 4.2.1 is faster than Mono 3.2.8

February 14, 2016


Hi again,

I am developing a prototype of microservices running in Docker containers. These microservices perform file operations using WCF. That will perhaps be the topic of future articles, insha'Allah.

On my Ubuntu 15.04 machine I had installed Mono 3.2.8, shipped with MonoDevelop. I then installed Docker and pulled the latest mono image from Docker Hub.

To study the overhead of my microservice, the same web service was also run directly on the host. A Windows application was used to consume the web services.

According to my tests writing files sequentially, the web service in Docker responded faster than the web service on the host.

As the Docker container shares the same kernel as my host, I checked the version of Mono installed in the image from which I created my container, using this command:

docker exec -it <container name or ID> mono --version

Bingo! Mono 4.2.1 was installed in the container. To verify that my hypothesis was correct, I updated Mono on the host, and the results were finally similar!

Thank you docker! And thank you Mono!

Categories: docker, Uncategorized

Certificate validation error can cause broken images in reporting services reports

January 16, 2016


I am so happy to meet WordPress readers again!

In some reports, image components are configured as external links using the HTTPS protocol. The links work fine in different browsers and the public certificate is validated.

However, while executing the reports, only external links using the HTTP protocol were working correctly. After googling the issue, I found some answers about configuring the Unattended Execution Account to allow reports to access external resources such as UNC files.

In my case, neither a proxy nor authentication is required to access web resources.

On the reporting server, I opened an external link in the browser and bingo: the certificate could not be validated. I had to install the root certificate on the server to get the certificate validated.
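Installing the root certificate can also be scripted; a minimal sketch, assuming the root CA was exported to a file named rootCA.cer (a hypothetical name):

```powershell
# Import the root CA certificate into the machine's Trusted Root store,
# so the report server can validate the HTTPS image links
Import-Certificate -FilePath .\rootCA.cer -CertStoreLocation Cert:\LocalMachine\Root
```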

After that, the images were displayed correctly finally!

Hope it helps!