Getting Ready For Cloud Citizenship…

In this post we will briefly examine what we need to consider when preparing our application(s) for Cloud Citizenship, i.e. making them native to the Cloud.

What necessitates 12 Factor?


This brings us to the Pets vs. Cattle story…

We treated app servers as our pets: we kept them close to us (on premises), cared for them, got them treated when they were unhealthy, added more power to them, and so on.

But servers are really cattle: today they are cheap; if you need more, go and buy them; if they are unhealthy, kill them; if you have extras, return them to the market.

In the Java world, application servers are pets: they are not disposable. Microservices, on the other hand, are cattle: they are cheap, quick to start, faster to bring down, easy to replace, and decoupled.

What is 12 Factor?

  • A methodology
  • Set of Principles
  • Best Practices based on experience and observations at Heroku

that leads to…….

  • Scalability
  • Maintainability
  • Portability

The mechanisms through which these are achieved:

  • Immutability – infrastructure is immutable.
  • Ephemerality – applications are ephemeral and disposable, not persistent.
  • Declarativity – declarative setup and configuration.
  • Automation – automate as much as we can.

What are those 12 factors?

Build/Deploy Focused:

  • Codebase
  • Dependencies
  • Configuration
  • Backing Services
  • Build, Release, Run

Architecture/Design Focused:

  • Processes
  • Port Binding
  • Concurrency
  • Disposability
  • Dev/Prod Parity
  • Logs
  • Admin Processes


Let’s take a look at each in detail:

Build/Deploy Factors Detailed…

  • Codebase
    • Should use a VCS
    • Most importantly: one repository per application
    • Shared code should be extracted into an application of its own and treated as a library
  • Dependencies
    • Explicitly declared and managed
    • Don’t expect your dependencies to be provided by the OS/container etc.
    • Don’t check in jar files into code repo
  • Configuration
    • Should be separated from code
    • Items which are specific to an environment and not to the application
    • Should be made available through environment variables or a similar mechanism, like our AMC.
    • Litmus test – could we open-source our code base without exposing any internal URLs or credentials?
  • Backing Services
    • Any service that is communicated with over a network
    • Examples: database connections, cache providers, file-sharing services like SFTP or Amazon S3, email services
    • Each is bound by a URL, so remote and local resources are treated identically; the URL is provided by the configuration
    • Consider these attached resources
    • This allows swapping out the service in each environment or data center
  • Build, Release, Run
    • Should be executed in 3 discrete steps
    • Build: compile code and produce an executable binary, e.g. a jar file
    • Release: combine the configuration with the build output to create a release image per deployment need
    • The release image has everything the application needs to run
    • Run: run the application from the release image
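
The Configuration factor above can be sketched in shell terms: the application reads everything environment-specific from environment variables and fails fast when a required one is missing. The script name, variable names, and URLs below are invented for the sketch, not from the original post:

```shell
# Hypothetical start script: everything environment-specific comes from env vars.
cat > /tmp/start.sh <<'EOF'
#!/bin/sh
: "${DB_URL:?DB_URL must be set}"      # fail fast if required config is missing
: "${CACHE_HOST:=localhost}"           # optional value with a default
echo "db=$DB_URL cache=$CACHE_HOST"
EOF

# Same code, two environments: only the variables change, not the code base,
# which passes the "open source" litmus test above.
DB_URL="postgres://qa-db:5432/app"   sh /tmp/start.sh > /tmp/qa.out
DB_URL="postgres://prod-db:5432/app" sh /tmp/start.sh > /tmp/prod.out
cat /tmp/qa.out /tmp/prod.out
```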

Architectural/Design Factors Detailed…

  • Processes
    • Should be stateless; when one goes down, it shouldn’t take anything important down with it
    • Memory usage should be short-lived, scoped to a single request or transaction
    • Anything that needs to be stored from operation to operation needs to go to a database or a cache
    • Sticky sessions are not good
    • Cache managers like EHCACHE that keep state in memory but distribute it are OK
  • Port Binding
    • Should be fully self-contained and shouldn’t rely on external infrastructure for anything
    • Should expose itself over a port, instead of relying on an application server to do this for it
    • Each process should have its communication protocols bound to a (usually non-standard) port, allowing it to run in a container in an isolated fashion
  • Concurrency
    • The JVM has some great concurrency libraries (java.util.concurrent, RxJava, etc.), but they are for scaling up
    • To scale out, diversify the workload: break tasks into applications that each do a single job, e.g. web request handler, backend job, scheduled job
    • Microservices help here
  • Disposability
    • Quick to start up, well within 60 seconds; refactor the application to get there
    • Graceful shutdown: within 10 seconds of receiving a TERM signal, the process should release its resources, clean up after itself, and go down gracefully
    • Resilient to failure: if it shuts down gracefully and comes up quickly, it can be called resilient to failure
    • App servers are pets; microservices are cattle: they are disposable
  • Dev/Prod Parity
    • The dev environment should be identical to the PROD environment and to every environment in between (staging, QA, UAT etc.)
    • Parity leads to reproducibility and reproducibility paves way towards disposability
  • Logs
    • Log messages are critical for operations in helping troubleshoot issues
    • Treat logs as an event data stream
    • Application writes its logs to standard out in the form of a stream
    • Each application shares the same stream
    • The logs can then be aggregated to another system like ELK for archival and reporting
    • Standardizing the logging output (e.g. as JSON messages) across all applications makes this aggregation easier
  • Admin Processes
    • Admin tasks should be run as isolated processes
    • Tasks shouldn’t be built into the application
    • They should be extracted and managed as applications in their own right
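
The graceful-shutdown behaviour described under Disposability can be sketched in shell. This is a toy worker with illustrative names, standing in for a real service: it traps the TERM signal, cleans up, and exits promptly.

```shell
# Toy long-running worker (illustrative): handles TERM by cleaning up and exiting.
cat > /tmp/worker.sh <<'EOF'
#!/bin/sh
cleanup() {
  echo "released resources, cleaned up"   # release connections, flush buffers, etc.
  exit 0
}
trap cleanup TERM
while true; do sleep 1; done              # the "work" loop
EOF

sh /tmp/worker.sh > /tmp/worker.out &
pid=$!
sleep 1
kill -TERM "$pid"    # what an orchestrator sends to shut a process down
wait "$pid"
cat /tmp/worker.out
```

docker stop sends exactly this TERM signal first and, after a grace period (10 seconds by default), follows up with KILL; a process that handles TERM like this goes down cleanly.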

Docker, make scripting distributed systems easy – III

In this post we will be exploring the Dockerfile: what it contains, how it gets utilized for building images, good practices to keep it succinct and performant, and some of its nuances.

So let’s write a very simple Dockerfile with the following content:

FROM busybox

MAINTAINER Vikash Pandey <>

RUN echo "From within Docker file"

CMD echo "Hello from Docker!"

and to build docker image out of this Dockerfile, we run:

docker build -t 01-dockerfile .

from the folder where we have the Dockerfile.

We are telling Docker: please build an image, give it the tag ’01-dockerfile’, from the Dockerfile found in the current directory (note the dot .).

[root@ 01]# docker build -t 01-dockerfile .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM busybox
 ---> 6ad733544a63
Step 2/4 : MAINTAINER Vikash Pandey <>
 ---> Running in 6e3b2921c55d
 ---> d1f04b381d40
Removing intermediate container 6e3b2921c55d
Step 3/4 : RUN echo "From within Docker file"
 ---> Running in 89d5984eb16f
From within Docker file
 ---> 08ce93cfaf9d
Removing intermediate container 89d5984eb16f
Step 4/4 : CMD echo "Hello from Docker!"
 ---> Running in 1f0ba8c27c99
 ---> 6fe05a552b66
Removing intermediate container 1f0ba8c27c99
Successfully built 6fe05a552b66
Successfully tagged 01-dockerfile:latest
[root@ 01]#

The output tells us a few things; let’s explore what it is saying.

It explains what is being done step by step.

In step 1, it starts from the busybox image.

In step 2, it executes the MAINTAINER statement. To run any statement, Docker creates a container from the previous image (note Running in…). Once it is done with the statement, it evaluates whether the intermediate container is required further down; if not, it removes it.

In step 3, it executes the RUN statement; the actual command to run is the Unix echo command. Again, to run something it needs a container (note Running in… again), and it removes the intermediate container if it is not required further down.

In step 4, it executes the CMD statement.

And finally it builds an image 6fe05a552b66 and tags it with the value provided via -t.

When we run a container from our image 01-dockerfile:latest, we get this:

[root@ ~]# docker run 01-dockerfile:latest
Hello from Docker!
[root@ ~]#

The container executes the command we provided with CMD. So what happened to the command we provided with RUN? CMD and ENTRYPOINT are the two statements through which we tell the image what to do when it runs as a container. RUN and the other statements (we will see some) are used during the build phase, adding layers to the image with the results of those statements.

So what is a Dockerfile? Dockerfiles are small programs that describe how to build a Docker image. When docker build finishes against a given Dockerfile, the resulting image is stored in the local Docker image store.

Each step produces a new image. A Dockerfile is a series of steps: start with one image, make a container out of it, run something in it, and save the result as a new image. The previous image is unchanged; Docker starts from it and makes a new one with the changes triggered by the respective statement(s).

A process we start on one line will not be running on the next line. It runs for the duration of that container; then that container is shut down and saved into an image, and we get a fresh start on the next line. So we can’t treat a Dockerfile like a shell script and start a program on one line, then send a message to it on the next line: the program won’t be running. If one program needs to start and then another needs to talk to it, both operations need to be on the same line so that they run in the same container.
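As a sketch of that point (with hypothetical /opt/app/server and /opt/app/client commands), compare:

```dockerfile
# Won't do what we want: the server started on the first line is
# gone by the time the second line runs in a fresh container.
RUN /opt/app/server --start
RUN /opt/app/client --send "hello"

# Works: both commands run in the same intermediate container.
RUN /opt/app/server --start & /opt/app/client --send "hello"
```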

Each step of running a Dockerfile is cached. Since later steps don’t modify earlier steps, the next time we run the build, if nothing changed, Docker doesn’t have to rerun a step: it can skip lines that haven’t changed since the last time we built this Dockerfile.

Point to note 1: put the parts of the Dockerfile that change most often at the end. That way the parts before them don’t need to be redone every time you change that part, saving time during builds.
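For example, a hypothetical application image could be layered like this, with the rarely-changing package installation first and the frequently-changing application code last (the app/ folder and run.sh are illustrative, not from the post):

```dockerfile
FROM debian:sid
# Rarely changes: this layer stays cached across builds
RUN apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/lists/*
# Changes often: kept last, so only this layer (and later ones) get rebuilt
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]
```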

Now let’s make our Dockerfile a little more interesting:

FROM debian:sid

MAINTAINER Vikash Pandey <>

RUN mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo 'Acquire::http::proxy "";' >> 99proxy \
&& apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/list/*

CMD ["/bin/nano", "/tmp/notes.txt"]

I have the proxy configuration part (mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo … >> 99proxy) only because I am behind a proxy; if you are not, skip it.

So what we are aiming for here: build a Docker image that starts from debian:sid as the base, updates the package lists, and installs the nano editor. When run, the container will launch nano to create a new file named notes.txt under its /tmp folder.

This is how the build proceeds:

[root@ 02]# docker build -t 02-dockerfile .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM debian:sid
sid: Pulling from library/debian
a8797cd0c76e: Pull complete
Digest: sha256:c7769191d696c5206540d871099d63757e96210b8678b5f0a4761569191919d4
Status: Downloaded newer image for debian:sid
 ---> 3bf719402098
Step 2/4 : MAINTAINER Vikash Pandey <>
 ---> Running in 46d35bf9ff6d
 ---> 9954dc9f9f37
Removing intermediate container 46d35bf9ff6d
Step 3/4 : RUN mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo 'Acquire::http::proxy "";' >> 99proxy && apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/list/*
 ---> Running in 3bf8dfa2a091
Get:1 sid InRelease [240 kB]
Get:2 sid/main amd64 Packages [10.5 MB]
Fetched 10.8 MB in 6s (1696 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
 lsb-base sensible-utils
Use 'apt autoremove' to remove them.
Suggested packages:
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.
Need to get 516 kB of archives.
After this operation, 2134 kB of additional disk space will be used.
Get:1 sid/main amd64 nano amd64 2.9.2-1 [516 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 516 kB in 1s (262 kB/s)
Selecting previously unselected package nano.
(Reading database ... 6553 files and directories currently installed.)
Preparing to unpack .../nano_2.9.2-1_amd64.deb ...
Unpacking nano (2.9.2-1) ...
Setting up nano (2.9.2-1) ...
update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in auto mode
update-alternatives: using /bin/nano to provide /usr/bin/pico (pico) in auto mode
 ---> 763c0a7ece5b
Removing intermediate container 3bf8dfa2a091
Step 4/4 : CMD nano /tmp/notes.txt
 ---> Running in fc2e844e4385
 ---> 04662db74d8c
Removing intermediate container fc2e844e4385
Successfully built 04662db74d8c
Successfully tagged 02-dockerfile:latest

I then edited my Dockerfile, and when I ran the build again, note how Docker used its cache to advantage.

[root@ 02]# vi Dockerfile
[root@ 02]# docker build -t 02-dockerfile .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM debian:sid
 ---> 3bf719402098
Step 2/4 : MAINTAINER Vikash Pandey <>
 ---> Using cache
 ---> 9954dc9f9f37
Step 3/4 : RUN mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo 'Acquire::http::proxy "";' >> 99proxy && apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/list/*
 ---> Using cache
 ---> 763c0a7ece5b
Step 4/4 : CMD nano /notes.txt
 ---> Running in b6959cc43da8
 ---> bd94fff70f96
Removing intermediate container b6959cc43da8
Successfully built bd94fff70f96
Successfully tagged 02-dockerfile:latest

When we run it with docker run -ti 02-dockerfile:latest, nano opens notes.txt, which can be saved under the /tmp folder. Great, our container offering an editor is ready!

There is one small issue though: the file name is fixed each time, notes.txt. Let’s see how we can use the ENTRYPOINT statement to fix that.

FROM debian:sid

MAINTAINER Vikash Pandey <>

RUN mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo 'Acquire::http::proxy "";' >> 99proxy \
&& apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/list/*

#CMD ["/bin/nano", "/tmp/notes.txt"]
ENTRYPOINT ["/bin/nano"]

and after building the image, run it like: docker run -ti 02-dockerfile:latest /tmp/notes5

The difference between CMD and ENTRYPOINT is that CMD is THE command to run upon container startup, whereas ENTRYPOINT has the command passed at container startup appended to it. With docker run -ti 02-dockerfile:latest /tmp/notes5 the command becomes /bin/nano /tmp/notes5.

Point to note 2: if we have CMD in our Dockerfile and we pass a command during container run, CMD gets overwritten by the passed-in command; with ENTRYPOINT, the passed-in command gets appended to it.
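The two can also be combined: ENTRYPOINT fixes the executable while CMD supplies default arguments, which a passed-in command overrides. A sketch building on the same nano image:

```dockerfile
ENTRYPOINT ["/bin/nano"]
CMD ["/tmp/notes.txt"]      # default argument, replaced by any command passed at run time
```

With this, docker run -ti the-image opens /tmp/notes.txt by default, while docker run -ti the-image /tmp/other.txt overrides only the CMD part and opens /tmp/other.txt.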

Let’s inherit from and extend our own Docker image 02-dockerfile:latest that we just built.

FROM 02-dockerfile:latest

MAINTAINER Vikash Pandey <>

COPY notes.txt /tmp/

ADD /tmp/

CMD ["/bin/nano", "/tmp/notes.txt", "/tmp/underscore.js"]

In this case we start from our image 02-dockerfile:latest and copy a local file notes.txt into the container under /tmp. We also reach out over a URL to pull content into the container under /tmp. COPY copies local files; ADD does what COPY does plus can pull content from a URL, and can copy a tarball and untar it at the destination path in the container.

Notice that in the previous Dockerfile’s CMD we refer to files with their full paths. We can use WORKDIR to set the current working directory in the container, like:

FROM 02-dockerfile:latest

WORKDIR /tmp
COPY notes.txt /tmp/
ADD /tmp/
CMD ["/bin/nano", "notes.txt", "underscore.js"]

Once set, WORKDIR stays in effect for the remainder of the Dockerfile, until we change its value again.

The EXPOSE statement documents the port(s) a container listens on; it does not publish them by itself. Publishing container ports to the host is done at run time with -p (or -P, which publishes all exposed ports).

RUN is to run given command(s) during the build. RUN, CMD and ENTRYPOINT can take commands in shell form as well as exec form: nano notes.txt is an example of shell form, while ["/bin/nano", "notes.txt"] is an example of exec form. The shell form runs the command through /bin/sh -c; the exec form runs it directly, without a shell.
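One practical difference between the two forms: the shell form gets variable expansion because /bin/sh runs the command, while the exec form does not. A small sketch (only the last CMD in a Dockerfile takes effect; the two lines are shown together only for comparison):

```dockerfile
CMD echo $HOME              # shell form: runs as /bin/sh -c 'echo $HOME', prints the home directory
CMD ["echo", "$HOME"]       # exec form: no shell involved, prints the literal string $HOME
```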

ENV sets an environment variable both for the rest of the Dockerfile and for the resulting image; its value can also be supplied from the command line with either -e or --env while running a container.

FROM 02-dockerfile:latest

COPY notes.txt /tmp/

ADD /tmp/

WORKDIR /tmp
ENV NEW_FILE_PATH newnotes.txt

CMD /bin/nano notes.txt underscore.js ${NEW_FILE_PATH}

When we run the image built out of it, like:

 docker run -ti 003-dockerfile:latest

You get nano opening notes.txt, then underscore.js with their respective content, and then an empty notes file under the /tmp folder, to be created with the typed-in content.

You can also run the container as

 docker run -ti -e NEW_FILE_PATH=/opt/myfiles.txt 003-dockerfile:latest

You get nano opening notes.txt, then underscore.js with their respective content, and then an empty myfiles.txt file under the /opt folder, to be created with the typed-in content. Here we saw how to set ENV in the Dockerfile and provide its value at container execution.

VOLUME declares shared or ephemeral volumes. In a Dockerfile, VOLUME takes one or more container paths and causes a volume to be created for each of them when a container starts; that volume can then be inherited by later containers (e.g. via --volumes-from). Mapping a host path into a container path is instead done at run time with -v. We should avoid tying an image to shared folders on a particular host, because then the setup only works on our computer, and we’ll probably want to share the image around, or at least run it on a different computer.
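A minimal sketch of the Dockerfile side, with the host mapping supplied at run time (the path is illustrative):

```dockerfile
FROM debian:sid
VOLUME /shared-space    # declares /shared-space as a volume; an anonymous
                        # volume is created for it when a container starts
```

At run time, docker run -v /data/on-host:/shared-space the-image would map a host path there instead of creating an anonymous volume.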

One last point before I close this post: each statement in a Dockerfile needs an intermediate container to run, hence it’s recommended to keep the Dockerfile as succinct as possible so that we can optimize the image build. Let’s look at the RUN command we have seen in this post:

RUN mkdir -p /etc/apt/apt.conf.d && cd /etc/apt/apt.conf.d && echo 'Acquire::http::proxy "";' >> 99proxy \
&& apt-get update && apt-get install -y nano && rm -rf /var/lib/apt/list/*

This RUN could instead be written as:

RUN mkdir -p /etc/apt/apt.conf.d

RUN cd /etc/apt/apt.conf.d

RUN echo 'Acquire::http::proxy "";' >> 99proxy 

RUN apt-get update 

RUN apt-get install -y nano 

RUN rm -rf /var/lib/apt/list/*

Try both versions and watch the image being built. The latter is not the recommended approach. && sequences shell commands one after the other, and ‘\’ says that all of these are part of a single RUN statement written on separate lines. Note also that the split version is not even equivalent: each RUN starts a fresh shell in a fresh container, so the cd on its own line has no effect on the RUN statements that follow it.

Further reading on Dockerfile

In upcoming posts we will explore docker-compose and some more topics……



Docker, make scripting distributed systems easy – II

In this post we will be exploring two Docker features:

  • volumes and
  • networks

We say that containers are usually immutable and ephemeral, meaning we should only re-deploy containers and avoid changing them in place. But what happens to the data that containers work with? By default, this data’s lifetime is tied to the container’s lifetime: removing the container sweeps the data out.

Docker provides the concept of volumes to persist data beyond a container’s lifetime. There are two ways to achieve this: volumes (named or otherwise) and bind mounts. We can even have volumes that behave as what we may call ephemeral volumes.

The volumes approach creates a special location outside of the container’s UFS (union file system).

docker run -d --name volume-nginx -p 8060:80 -v nginx-data:/usr/share/nginx/html nginx

then browse to http://<host-running-container name/IP>:8060



Edit the file from inside the container. I am behind a proxy, hence I had to do a few extra steps (they may help in case you happen to be behind a proxy too):

docker exec -ti volume-nginx bash

cd /etc/apt/apt.conf.d/

echo 'Acquire::http::proxy "";' >> 99proxy

apt-get update

apt-get install -y vim

vim /usr/share/nginx/html/index.html --> make desired changes.

Here is my changed index.html output:



Now let’s remove this container and run another one, making it use the volume to which the removed container wrote its index.html edits.

 docker rm -f volume-nginx

 docker run -d --name volume-nginx1 -p 8070:80 -v nginx-data:/usr/share/nginx/html nginx

Here is what I see when I hit port 8070:



My new container running on 8070 is using the data that the other container (exposed on 8060) wrote.

Bind mounts link a host path to a container path, basically making two locations point to the same files. Their usage was explained with an example in Part I.

Bind mounts are very useful during development: changes on the host are reflected in the container.

We do have ephemeral volumes as well: --volumes-from allows a volume to be shared among running containers, and such a volume lasts until the last container using it exits.
For example:

docker run -ti --rm --name volume-creator -v /shared-space ubuntu:14.04 bash
cd /shared-space
# create a file:
echo "data1" > myfile1

With this we have a container that has created a volume /shared-space, which other containers are going to use via the --volumes-from option.

Create another container as

docker run -ti --rm --name volume-cons1 --volumes-from volume-creator ubuntu:14.04 bash
cd /shared-space
# add a few more files:
echo "data22" > myfile2

We see the file from creator container.

Now kill the creator container, and you still see /shared-space in the volume-cons1 container.

Let’s create another container

docker run -ti --rm --name volume-cons11 --volumes-from volume-cons1 ubuntu:14.04 bash
cd /shared-space 

and we see both files.

Kill all these containers and the shared volume is gone. See the images below for the execution of the above commands.


This image shows the count of volumes changing: it goes up as the volume gets created and drops by 1 when all consumers using this ephemeral volume cease to exist.



What happens when a container is created, from a networking point of view? Where does it get its IP from, which network does it attach itself to, how does inter-container communication happen, and do we have an opportunity to step in and configure things to our need? Let’s look inside with the help of an example.

Create 2 containers like:

docker run --name web-default-net-1 -d httpd
docker run --name web-default-net-2 -d httpd
docker exec -ti web-default-net-2 bash
root@ac87a0ab4ec5:/usr/local/apache2# ping web-default-net-1
ping: unknown host

We see that these two containers don’t recognize each other, even though they are on the same virtual network, identified as bridge.

docker inspect --format="IP: {{.NetworkSettings.Networks.bridge.IPAddress}} Gateway: {{.NetworkSettings.Networks.bridge.Gateway}}" web-default-net-2 web-default-net-1
IP: Gateway:
IP: Gateway:

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet netmask broadcast
 inet6 fe80::42:6eff:fef4:2f67 prefixlen 64 scopeid 0x20<link>
 ether 02:42:6e:f4:2f:67 txqueuelen 0 (Ethernet)
 RX packets 4390968 bytes 1161893124 (1.0 GiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 6258711 bytes 1128943454 (1.0 GiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

We will come back to how to solve this situation, but before that let’s create a custom virtual network:

docker network create custom-net
docker inspect custom-net
 "Name": "custom-net",
 "Id": "15065a5451dc9f604c1b57bcc8f33a4446835d9876376ddae6710c7dbff6f25e",
 "Created": "2018-01-02T09:19:05.112114846+05:30",
 "Scope": "local",
 "Driver": "bridge",
 "EnableIPv6": false,
 "IPAM": {
 "Driver": "default",
 "Options": {},
 "Config": [
 "Subnet": "",
 "Gateway": ""
 "Internal": false,
 "Attachable": false,
 "Ingress": false,
 "Containers": {},
 "Options": {},
 "Labels": {}
br-15065a5451dc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet netmask broadcast
 inet6 fe80::42:cfff:fe95:850e prefixlen 64 scopeid 0x20<link>
 ether 02:42:cf:95:85:0e txqueuelen 0 (Ethernet)
 RX packets 9 bytes 645 (645.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 17 bytes 1376 (1.3 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

And create containers connected to this custom virtual network:

docker run --name web-custom-net-1 -d --net=custom-net httpd
docker run --name web-custom-net-2 -d --net=custom-net httpd
docker inspect web-custom-net-2 
 "Networks": {
 "custom-net": {
 "IPAMConfig": null,
 "Links": null,
 "Aliases": [
 "NetworkID": "15065a5451dc9f604c1b57bcc8f33a4446835d9876376ddae6710c7dbff6f25e",
 "EndpointID": "50fb78946c0c4d3cb4016f6926eed7e6fe9c9136111afd777419db91762f975d",
 "Gateway": "",
 "IPAddress": "",
 "IPPrefixLen": 16,
 "IPv6Gateway": "",
 "GlobalIPv6Address": "",
 "GlobalIPv6PrefixLen": 0,
 "MacAddress": "02:42:ac:14:00:03"

If we bash into either of these containers and try to ping the other:

docker exec -ti web-custom-net-2 bash
root@d676e77b392b:/usr/local/apache2# ping web-custom-net-1
PING web-custom-net-1 ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.143 ms
64 bytes from icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from icmp_seq=2 ttl=64 time=0.110 ms
64 bytes from icmp_seq=3 ttl=64 time=0.107 ms
64 bytes from icmp_seq=4 ttl=64 time=0.111 ms
64 bytes from icmp_seq=5 ttl=64 time=0.131 ms
64 bytes from icmp_seq=6 ttl=64 time=0.140 ms
64 bytes from icmp_seq=7 ttl=64 time=0.163 ms
^C--- web-custom-net-1 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.107/0.132/0.163/0.000 ms

They know one another. How does that happen?

It happens because of a built-in DNS server that uses the container name as the equivalent of a host name; hence the name of a container has its own importance. The built-in DNS server does not come with the default bridge virtual network, which is why we couldn’t make the first two containers talk to each other.
--link is the workaround to enable DNS between containers on the default bridge virtual network.
Docker Compose by default creates a virtual network for the application we are spinning up and takes care of DNS resolution for the containers it creates, without needing --link.

Now let’s try the --link workaround to resolve the situation in the first use case.

docker run --name web-default-net-3 -d --link web-default-net-1 httpd
docker exec -ti web-default-net-3 bash
root@5e2762afc8ca:/usr/local/apache2# ping web-default-net-1
PING web-default-net-1 ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.419 ms
64 bytes from icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from icmp_seq=2 ttl=64 time=0.160 ms
64 bytes from icmp_seq=3 ttl=64 time=0.123 ms
64 bytes from icmp_seq=4 ttl=64 time=0.166 ms
64 bytes from icmp_seq=5 ttl=64 time=0.120 ms
64 bytes from icmp_seq=6 ttl=64 time=0.131 ms
64 bytes from icmp_seq=7 ttl=64 time=0.109 ms
^C--- web-default-net-1 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.109/0.169/0.419/0.096 ms

Containers can also be attached to and detached from networks after creation:

docker network connect custom-net web-default-net-2

docker network disconnect custom-net web-default-net-2


Next we will be talking about Dockerfile and docker-compose…







Docker, make scripting distributed systems easy – I

In this series of posts, I will share my experience working with Docker as a developer and will expand on a few topics, like:

  • image metadata and how we can get to it
  • ephemeral volumes, bind mounts
  • the network concepts used inside
  • Dockerfile, docker-compose, and more…

Starting with: what is Docker? Docker is two programs, a client and a server. The server receives commands from the client over a socket, either over a network or through a Unix socket file. On a host where Docker is installed, we can find the socket file at /var/run/docker.sock.

With this information, let’s try to run the Docker client in a Docker container, where the client sends commands to the Docker server through the docker.sock file.

docker run -ti -v /var/run/docker.sock:/var/run/docker.sock docker sh

What does this command do? It’s going to:

  • Look for an image named ‘docker’ in the host’s local Docker image store; if found, it will use the local copy, else it will pull the image from the Docker Hub registry.
  • Map the /var/run/docker.sock file on the host into the container at /var/run/docker.sock, so that updates are reflected on both sides.
  • Give an interactive terminal inside the container and run the ‘sh’ command in it.

After executing this command, we are inside a container that has the Docker client in it, and we can run docker from within:

docker run -ti ubuntu bash

This command is simpler than the earlier one: it runs the latest ubuntu image, giving an interactive terminal with bash running inside.

By now we are running docker client from within a container itself.


Let’s talk a bit about images. Images are created from a Dockerfile, which lists the steps to create an image. Each step in a Dockerfile adds a layer on top of the previous step’s image, by running the previous image as an intermediate container and executing the current step on top of it. Docker may remove intermediate containers as it sees fit. But where do images get stored on the host machine? On a CentOS 7 host (with the devicemapper storage driver) they are at:

ls -l /var/lib/docker/image/devicemapper/imagedb/content/sha256/
total 3620
-rw-------. 1 root root 13712 Dec 29 13:05 0008e3a6103746ec4f302fffc13fb796e461b71add7209366f8ab9ad46622f77
-rw-------. 1 root root 8825 Dec 29 13:05 0046e7a2b0932bd0e99467b32401d80d8d3ea5f7a33b2acd44f47372d2e3872f
-rw-------. 1 root root 3615 Jan 1 10:22 00fd29ccc6f167fa991580690a00e844664cb2381c74cd14d539e36ca014f043
-rw-------. 1 root root 9357 Dec 29 13:05 021af8ef946e34a20dc2cdc06a82edfbd426249ee2c9d2f6dcd707c23a132aaa
-rw-------. 1 root root 7574 Dec 29 13:05 0232177273551cd33a469aeace543931598e028a1bf4b4591cc5d3dfeba5af64
-rw-------. 1 root root 811 Dec 29 13:05 02424f5e7e451ea699a4d8058d733f51d78658cd0fd86b07645cf158bfccc0ad
-rw-------. 1 root root 8112 Dec 29 13:05 02be064043ed0cf60bc3d572ced06159cbc4805766df624f9b4d2405a844d89a
-rw-------. 1 root root 3241 Dec 29 13:05 037fbf47952e2cfc291a23b19b0e665df1fa924b06f47e4d6eb2f1a1d459909b
-rw-------. 1 root root 1577 Dec 29 13:05 0388af444d5ac9b30c56e14f669ef917da437d316026f494d31bca315daa95e4
-rw-------. 1 root root 2804 Dec 29 13:05 039f1bb3922f20162d1f2e43dc308a21fb975eed0990f31fedd0cc19b4e335ab
-rw-------. 1 root root 7363 Dec 29 13:05 03d3db4469c289f4fd7fd626bcd01dc6fbd12d1ea0f8c1f2ade84f89523c3685
-rw-------. 1 root root 4149 Dec 29 13:05 04cf91413004c1d92387ee8d652e9c29c4448c0c26c9c9acf74f356a4261f2a9
-rw-------. 1 root root 4887 Dec 29 13:05 04ded2d551766603331838fdb689988e2b257a7ff7ea41ab4652e43afa977379
-rw-------. 1 root root 1194 Dec 29 13:05 05138b69f83fb7ebeac66ee84e7c7ca937edb2e3ae24ec55b3d5b167af2ef6ce
-rw-------. 1 root root 1286 Dec 29 13:05 058fafbdf5523cf24cc19b2dc46e611dff716af281a4d54745a7ec74d7b6a0a1
-rw-------. 1 root root 3978 Dec 29 13:05 05f608c6041e4f45a90734cd0c7d0bd081944f30470b6ed4fdc417f523db23f7
-rw-------. 1 root root 8592 Dec 29 13:05 0615533b88143b1b8f449a4d01ca339ebf02242d3a41d74f9140fabf176f5ce2
-rw-------. 1 root root 5807 Dec 29 13:05 0717bf27b9de19ad493026775f04e113fbc23bc1f966f6a1637c01560c5ecddf
-rw-------. 1 root root 9502 Dec 29 13:05 084085ef3ff7c1711fb984793696926842219401aa6a018b62b3a89d51a45dea
-rw-------. 1 root root 1149 Dec 29 13:05 084d63991302ebe404105920913a7ed851cf012e5b0f3e9c2b6a9fb6cf10214c
-rw-------. 1 root root 5962 Dec 29 13:05 0a928172a05ca4f8185b095e6a28877f7f68dbc55886323fef3b8353b65d3c97
-rw-------. 1 root root 1863 Dec 29 13:05 0aec253eb94e71d72336480e3408177ce67968d4ea1dcfabfe4f0d9e5f85ad70

At /var/lib/docker/image/devicemapper/, we see a repositories.json that stores image-related data as JSON; an extract is shown below:


The folder /var/lib/docker/ stores all information about containers, images, networks and volumes.

  • To list all images we use – docker images
  • To remove an image we use – docker rmi <image-name/id>
  • To force remove an image that has a container – docker rmi -f <image-name/id>

When a container is running, the container consumer would typically need to do a few things:

  • See all running containers with docker ps
  • See all container running or stopped with docker ps -a
  • Look at the logs of the container with docker logs <container-name/id>; follow a running container's log with docker logs -f <container-name/id>
  • Look into the metadata of the container with docker inspect <container-name/id>. We can use inspect to look into image metadata too. The inspect output provides many details, including the IP address of the container, its volume mounts, binds, network configuration etc.
  • Move containers across virtual networks with docker network connect <network-name> <container-name> to join a virtual network and docker network disconnect <network-name> <container-name> to leave one. Networks come in very handy when we need seamless communication across containers; logically putting all our containers on the same virtual network eases a lot of communication needs.
  • Stop a running container with docker stop <container-name/id>
  • Remove a container with docker rm <container-name/id>; if the container is running, we can force removal with docker rm -f <container-name/id>

If we want to see how a container is doing –

docker stats <container-name/id>
NAME        CPU %   MEM USAGE / LIMIT    MEM %   NET I/O           BLOCK I/O      PIDS
postgres2   0.00%   40.11MiB / 7.64GiB   0.51%   10.1kB / 6.81kB   29MB / 718kB   0


To see what processes are running inside a container –

docker top <container-name/id>
UID       PID     PPID    C   STIME   TTY   TIME       CMD
polkitd   24389   24368   0   2017    ?     00:00:02   postgres
polkitd   24437   24389   0   2017    ?     00:00:00   postgres: checkpointer process
polkitd   24438   24389   0   2017    ?     00:00:03   postgres: writer process
polkitd   24439   24389   0   2017    ?     00:00:03   postgres: wal writer process
polkitd   24440   24389   0   2017    ?     00:00:02   postgres: autovacuum launcher process
polkitd   24441   24389   0   2017    ?     00:00:05   postgres: stats collector process

If we want to run a command (or get a shell) inside a running container –

docker exec -ti <container-name/id> <command to run>
[vikash.pandey]# docker exec -ti postgres2 echo hello postgres
hello postgres


Let’s take a very simple example, where we will use the nginx image to run a container. We will run the container in such a way that we can edit index.html on the host machine and see the changes in real time in the browser.

Make a directory and create an index.html inside it. From within that directory run-

docker run -d --name livenginx -p 8050:80 -v $(pwd):/usr/share/nginx/html nginx


Once this is done hit http://<your docker host IP/NAME>:8050


Keep editing index.html on host and see the changes by refreshing the page.


Let's decode 'docker run -d --name livenginx -p 8050:80 -v $(pwd):/usr/share/nginx/html nginx'.

-d = run this container detached, i.e. in the background, like a daemon/service.

--name = give a convenient name to this container

-p <host port>:<container port> = map a host port to a container port, i.e. all incoming requests on port 8050 of the host will be redirected to port 80 of the container. If we omit the host-port part of the pair, docker finds and assigns a unique host port for the mapped container port. In case you wish to specify the protocol (by default it's tcp), write it as <host port>:<container port>/udp

-v <host path>:<container path> = map a host path into a container path, so that edits on either side are reflected on the other. This is called a bind mount of a volume and is very useful during development.

Volumes help persist container data; otherwise containers are ephemeral. More on volumes in an upcoming post.

Before I conclude this one: when running a container, we may like to control the resources allocated to it.

docker run -d -ti --cpu-shares 20 busybox sh

docker inspect --format="Memory: {{ .HostConfig.Memory}} CPUShares: {{ .HostConfig.CpuShares}}" kind_mcclintock
Memory: 0 CPUShares: 20

docker run -d -ti --cpu-shares 20 --memory 1GB busybox sh

 docker inspect --format="Memory: {{ .HostConfig.Memory}} CPUShares: {{ .HostConfig.CpuShares}}" confident_davinci
Memory: 1.073741824e+09 CPUShares: 20

If we don't name our containers, docker generates convenient names for them by default. A container's name has its own importance, and we will talk about it when we discuss inter-container communication and networking.

Apart from the many other values containers bring, they are very convenient when we want to learn a tool like Drupal or WordPress, taking away the multitude of steps we would otherwise follow the traditional software-installation way. Containers remove the complex installation and deployment steps that often become a barrier to getting started on something. They let us do things much more efficiently and test on multiple platforms, like running a script on various Linux variants without installing those variants: just run a container with the desired Linux variant and tear it down as you wish.

Stay tuned for volumes and network in part II………



SCRUM helps reduce Procrastination

Procrastination is "the practice of carrying out less urgent tasks in preference to more urgent ones, or doing more pleasurable things in place of less pleasurable ones, and thus putting off impending tasks to a later time". So why do we procrastinate?

I came across various reasons and causes for procrastination; here are a few:

  • Lack of confidence
  • Easy Distraction
  • Feeling overwhelmed
  • Blocked creativity
  • Disliking the task

And then I looked to SCRUM and Agility for ways to resolve procrastination.

Let's look at lack of confidence first. Procrastinators set really high standards for themselves, and that causes lack of confidence: is my work going to meet the standard, won't it expose that I am not good at this skill or activity? That tends to make us hold on to a task longer or keep pushing it to some other day. SCRUM's principle of producing results at short intervals and continuously collecting feedback goes hand in hand with the procrastinator's mantra for this cause: "production before perfection". Act on it, produce results, get them reviewed and keep moving towards perfection, a contextual perfection accepted by all stakeholders rather than just you.

Second, feeling overwhelmed: a big chunk of work, too many things to consider before acting, meaning too many reasons and risks to keep it on hold. SCRUM's practice of breaking down overwhelming work into epics and stories, which can be thought through relatively quickly in a smaller context, encourages us to see a clearer picture, reduces the feeling of being overwhelmed and enables us to start on the task at hand rather than keep procrastinating. Setting interim deadlines is another remedy; producing and reviewing results at those deadlines confirms whether we are doing it right or need a correction. Timeboxed sprints, demos to stakeholders and retrospectives are the events that help us break the overwhelmed syndrome.

Third, blocked creativity, often true when we keep trying within ourselves. You want to get that work done, you're sick of having it hang over you, but you're out of good ideas. If you're working alone on a task and looking for a creative idea, run your ideas by the SCRUM team and see if that sparks the creativity you're searching for. SCRUM encourages communication among the SCRUM team and all stakeholders; sharing your ideas with the team could be just that one thing.

Finally, disliking the task: starting small on that very task with a sense of belonging, contributing to the bigger cause the SCRUM team is set to, partnering in success with the team and learning on the go could change our perception of the task. We may find an exciting and efficient way to work on it, and we may also realize its value.

Agility, adaptability, commitment and openness could help reduce procrastination.

Integration Testing Angular Applications – Part I

Continuing from my previous post on testing Angular applications, Unit Testing Angular Applications, this post explores an integration testing approach for the following features:

  • Component having property and event binding,
  • Directive,
  • Pipe

Component having property and event binding

Let's look at what we have in this component and its usage, and then we will see our integration test code.

//TS file

import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-voter',
  templateUrl: './voter.component.html',
  styleUrls: ['./voter.component.css']
})
export class VoterComponent {
  @Input() othersVote = 0;
  @Input() myVote = 0;

  @Output() vote = new EventEmitter();

  upVote() {
    if (this.myVote == 1)
      return;
    this.myVote++;
    this.vote.emit({ myVote: this.myVote });
  }

  downVote() {
    if (this.myVote == -1)
      return;
    this.myVote--;
    this.vote.emit({ myVote: this.myVote });
  }

  get totalVotes() {
    return this.othersVote + this.myVote;
  }
}


<!-- template file -->
<div class="voter">
  <i class="glyphicon glyphicon-menu-up vote-button"
     [class.highlighted]="myVote == 1"
     (click)="upVote()"></i>
  <span class="vote-count">{{ totalVotes }}</span>
  <i class="glyphicon glyphicon-menu-down vote-button"
     [class.highlighted]="myVote == -1"
     (click)="downVote()"></i>
</div>

We are going to test the following test cases –

  • should render total votes counter
  • should highlight upvote button when upVoted
  • should increase totalVotes when upvote button is clicked
  • should decrease totalVotes when downvote button is clicked

This is what we have in our test:

import { By } from '@angular/platform-browser';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { VoterComponent } from './voter.component';

describe('VoterComponent', () => {
  let component: VoterComponent;
  let fixture: ComponentFixture<VoterComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [ VoterComponent ]
    });
    fixture = TestBed.createComponent(VoterComponent);
    component = fixture.componentInstance;
  });

  it('should render total votes counter', () => {
    component.othersVote = 20;
    component.myVote = 1;

    fixture.detectChanges();

    let de = fixture.debugElement.query(By.css('.vote-count'));
    let el: HTMLElement = de.nativeElement;
    expect(el.innerText).toContain('21');
  });

  it('should highlight upvote button when upVoted', () => {
    component.myVote = 1;

    fixture.detectChanges();

    let de = fixture.debugElement.query(By.css('.glyphicon-menu-up'));
    expect(de.classes['highlighted']).toBeTruthy();
  });

  it('should increase totalVotes when upvote button is clicked', () => {
    fixture.detectChanges();
    let button = fixture.debugElement.query(By.css('.glyphicon-menu-up'));

    button.triggerEventHandler('click', null);

    expect(component.totalVotes).toBe(1);
  });

  it('should decrease totalVotes when downvote button is clicked', () => {
    fixture.detectChanges();
    let button = fixture.debugElement.query(By.css('.glyphicon-menu-down'));

    button.triggerEventHandler('click', null);

    expect(component.totalVotes).toBe(-1);
  });
});


A few major differences from our unit testing approach: here we are not new'ing the component; we use TestBed to configure a testing module that simulates the application's regular module, and via the fixture and component we simulate HTML events, like button clicks, working directly with the HTML elements of the template to act and assert.


Directive

Our directive is called HighlightDirective, with a defaultColor and another color that can be set by the consuming component via property binding. Let's look at its code:

import { Directive, Input, ElementRef, OnChanges } from '@angular/core';

@Directive({
  selector: '[highlight]'
})
export class HighlightDirective implements OnChanges {
  defaultColor = 'pink';
  @Input('highlight') bgColor: string;

  constructor(private el: ElementRef) {
  }

  ngOnChanges() { = this.bgColor || this.defaultColor;
  }
}

We will be creating the component that will use this directive with following template:

@Component({
  template: `
    <p highlight="lightblue">First</p>
    <p highlight>Second</p>
  `
})
class DirectiveHostComponent {
}

So we are going to test the following test cases –

  • should highlight 1st para with directives bgColor color
  • should highlight 2nd para with default background color
  • should set directives bgColor color with lightblue

Here is what we write in test spec file:

import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { HighlightDirective } from './highlight.directive';
import { By } from '@angular/platform-browser';
import { Component } from '@angular/core';

//Important to create the component here so that we can apply the directive to its
//template elements and test the effect.
@Component({
  template: `
    <p highlight="lightblue">First</p>
    <p highlight>Second</p>
  `
})
class DirectiveHostComponent {
}

describe('HighlightDirective', () => {
  let fixture: ComponentFixture<DirectiveHostComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [ DirectiveHostComponent, HighlightDirective ]
    });
    fixture = TestBed.createComponent(DirectiveHostComponent);
    fixture.detectChanges();
  });

  it('should highlight 1st para with directives bgColor color', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[0]; //get 1st para element

    let directive = de.injector.get(HighlightDirective);
    expect(['backgroundColor).toBe(directive.bgColor);
  });

  it('should highlight 2nd para with default background color', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[1]; //get 2nd para element

    let directive = de.injector.get(HighlightDirective);
    expect(['backgroundColor).toBe(directive.defaultColor);
  });

  it('should set directives bgColor color with lightblue', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[0]; //get 1st para element

    let directive = de.injector.get(HighlightDirective);
    expect(directive.bgColor).toBe('lightblue');
  });
});

To reduce dependencies and keep the test clean, we created the component that uses the directive in the test spec file itself.


Pipe

Our pipe is going to transform the text it is applied to into TitleCase. Let's see its code:

import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'titlecase'
})
export class TitlecasePipe implements PipeTransform {

  transform(input: any, args?: any): any {
    if (typeof input !== 'string') {
      throw new Error('Requires a String as input');
    }
    return input.length === 0 ? '' :
      input.replace(/\w\S*/g, (txt => txt[0].toUpperCase() + txt.substr(1).toLowerCase()));
  }
}


The usage of the pipe:

<span>{{ title | titlecase }}</span>

The component that is using it should have test code as shown below:

//imports (file paths assumed per Angular CLI conventions)
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { TitlecasePipe } from './titlecase.pipe';
import { UserDetailsComponent } from './user-details.component';

describe('UserDetailsComponent', () => {
  let component: UserDetailsComponent;
  let fixture: ComponentFixture<UserDetailsComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      imports: [],
      declarations: [UserDetailsComponent, TitlecasePipe],
      providers: []
    });
    fixture = TestBed.createComponent(UserDetailsComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should convert title name to Title Case', () => {
    const inputName = 'quick BROWN fox';
    const titleCaseName = 'Quick Brown Fox';
    let titleDisplay = fixture.debugElement.query(By.css('span')).nativeElement;
    let titleInput = fixture.debugElement.query(By.css('input')).nativeElement;

    // simulate user entering new name into the input box
    titleInput.value = inputName;

    // dispatch a DOM event so that Angular learns of input value change.
    let evnt = document.createEvent('CustomEvent');
    evnt.initCustomEvent('input', false, false, null);
    titleInput.dispatchEvent(evnt);

    // Tell Angular to update the output span through the title pipe
    fixture.detectChanges();
    expect(titleDisplay.textContent).toBe(titleCaseName);
  });
});


And we can unit test the pipe by writing the code below:

import { TitlecasePipe } from './titlecase.pipe';

describe('TitlecasePipe', () => {
  const pipe = new TitlecasePipe();

  it('create an instance', () => {
    expect(pipe).toBeTruthy();
  });

  it('should work with empty string', () => {
    expect(pipe.transform('')).toBe('');
  });

  it('should titlecase given string input', () => {
    expect(pipe.transform('quick BROWN fox')).toBe('Quick Brown Fox');
  });

  it('should throw error with invalid values', () => {
    //must use arrow function for expect to capture exception
    expect(() => pipe.transform(9)).toThrowError('Requires a String as input');
  });
});

A point worth noting: when we create a component, directive etc. with the ng generate utility, we see 2 copies of beforeEach, as shown below:

beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [ <<YourComponent>> ]
  }).compileComponents();
}));

beforeEach(() => {
  fixture = TestBed.createComponent(<<YourComponent>>);
  component = fixture.componentInstance;
});

Note the async version; we may safely remove that copy, because with @angular/cli, webpack is our default build and packaging tool, and webpack compiles and inlines templates, so we are not required to reach out to the file system asynchronously and compile them separately. For this reason, the async version of beforeEach is missing from all the test spec code provided in this post.


Great to see us writing clean, maintainable and well tested code! In part II of this post we will be exploring integration testing approach for Services and Routes.

Unit Testing Angular Applications

As and when we write an application, testing is one of the most fundamental activities we as developers are expected to do. It has a lot of benefits: it ensures we write quality, maintainable code, and while writing test cases we identify coupling among application components and get an opportunity to re-look at our design, clearing unneeded coupling and reducing unnecessary dependencies.

In this post we are going to start exploring how we would be building our unit test for following scenarios:

  • very basic function
  • testing strings and arrays
  • testing a simple class
  • testing a class having angular form in it
  • testing a service
  • testing a component that emits event

I will expand this to cover integration testing, where we will explore writing test cases for most of the above scenarios in integration with the Angular framework, testing routers, services and components while simulating user interactions, in my upcoming post on this topic.

Unit Testing a basic function

Let's suppose we have a function named compute that takes a number and increments it if the passed-in value is >= zero (returning 0 otherwise).


export function compute(number) {
  if (number < 0)
    return 0;
  return number + 1;
}

In that very same folder, create a file with the suffix '.spec.ts'; assuming the function is written in compute.ts, create compute.spec.ts and add the code shown below:

import { compute } from './compute';

describe('compute', () => {
  it('should return 0 when called with negative numbers', () => {
    let result = compute(-1);
    expect(result).toBe(0);
  });

  it('should increment by 1 when called with non negative numbers', () => {
    const parameter = 1;
    let result = compute(parameter);
    expect(result).toBe(parameter + 1);
  });
});

In our learning we are using @angular/cli as the tool, which uses karma and jasmine. This is all taken care of and created for you when you create a new Angular application using @angular/cli with the command:

ng new <<your-app-name>>

then you change directory to newly created application and run

ng test

This ensures the karma test engine is running and responding to changes you do to your test file(s).

You would see something like this in your console, where you ran ‘ng test’ –


ng test also launches a web interface at http://localhost:9876/; go to this URL, click on the DEBUG button and open the browser console (F12) to see how your tests are performing.



A bit about karma/jasmine: describe is the function with which we write a test suite, and inside it, using the it function, we write our test cases. You configure karma in the karma.conf.js file available in your application; a quick look confirms why we see karma's web interface on port 9876.

An excerpt from karma.conf.js –

angularCli: {
  config: './angular-cli.json',
  environment: 'dev'
},
reporters: config.angularCli && config.angularCli.codeCoverage
  ? ['progress', 'karma-remap-istanbul']
  : ['progress'],
port: 9876,
colors: true,
logLevel: config.LOG_INFO,
autoWatch: true,
browsers: ['Chrome'],
singleRun: false


Unit Testing strings and arrays

This is our code that we would like to write test for:


export function greet(name) {
  return 'Welcome ' + name;
}

export function getCurrencies() {
  return ['USD', 'AUD', 'EUR', 'INR'];
}

Let's look at the tests:


import { greet } from './greet';

describe('greet', () => {
  it('should contain passed param in the message', () => {
    const parameter = 'Vikash';
    expect(greet(parameter)).toContain(parameter);
  });
});

import { getCurrencies } from './getCurrencies';

describe('getCurrencies', () => {
  it('should return supported currencies', () => {
    const currencies = getCurrencies();
    expect(currencies).toContain('INR');
  });
});

It's very similar to the earlier test we wrote for the compute function. Please note that we could have made our greet test pass even with toBe('Welcome ' + parameter), but that makes the test fragile: it would break easily if we changed the static text in the greet function, say from 'Welcome' to 'Hello' or 'Hola'. toContain protects us from that fragility and is sufficient to cover what we need in this test: that the data we pass as a parameter is part of the message returned from the greet function.


Unit Testing a simple class

Here comes our very, very simple class –

export class UserResponseComponent {
  totalLikes = 0;
  like() { this.totalLikes++; }
  disLike() { this.totalLikes--; }
}

And here is our test file:

import { UserResponseComponent } from './user.response.component';

describe('UserResponseComponent', () => {
  let userRespComp = null;
  beforeEach(() => {
    userRespComp = new UserResponseComponent();
  });
  it('should increment the totalLikes counter by 1 when liked', () => {
    expect(userRespComp.totalLikes).toBe(1);
  });
  it('should decrement the totalLikes counter by 1 when disliked', () => {
    expect(userRespComp.totalLikes).toBe(-1);
  });
});

A few things to note here:

We need to create an instance of this class so that we can access its methods. Where should we create that instance? We could have created it in each it function, but that would go against the DRY (Don't Repeat Yourself) principle.

The need is to create a fresh instance for each test case, and jasmine offers the beforeEach function for exactly this purpose. The code inside beforeEach will be executed before each it function call.

We most commonly call the activity inside beforeEach Arrange, and inside each it function we Act and Assert. There is also afterEach, which we can use to tear down the setup we did in beforeEach.

Don't forget to keep going back to your console and browser console to see how your tests are performing. :)

By now we are a little more confident with the testing framework in use and ready to take on some complex scenarios. Let's look at testing a class that uses an Angular form.

Unit Testing a class having angular form in it

Here is how our class looks like –

import { FormBuilder, Validators, FormGroup } from '@angular/forms';

export class TodoFormComponent {
  form: FormGroup;

  constructor(fb: FormBuilder) {
    this.form ={
      name: ['', Validators.required],
      email: ['']
    });
  }
}

And the test code –

import { FormBuilder } from '@angular/forms';
import { TodoFormComponent } from './todo-form.component';

describe('TodoFormComponent', () => {
  var component: TodoFormComponent;

  beforeEach(() => {
    component = new TodoFormComponent(new FormBuilder());
  });

  it('should create form with 2 controls', () => {
    expect(component.form.contains('name')).toBeTruthy();
    expect(component.form.contains('email')).toBeTruthy();
  });

  it('should make name control as required when empty value is set', () => {
    let control = component.form.get('name');
    let value = '';
    control.setValue(value);
    expect(control.valid).toBeFalsy();
  });

  it('should make name control as required when null value is set', () => {
    let control = component.form.get('name');
    let value = null;
    control.setValue(value);
    expect(control.valid).toBeFalsy();
  });

  it('should pass required validation when a valid value is set', () => {
    let control = component.form.get('name');
    let value = 'Vikash';
    control.setValue(value);
    expect(control.valid).toBeTruthy();
  });
});

In these tests we ensure that our form gets created with the desired number of controls, and, based on the value given to the name form control, we test whether its validator is working for us.

How about a class that emits an event? Here comes our class that emits an event –

import { EventEmitter } from '@angular/core';

export class UserResponseComponent {
  totalLikes = 0;
  likeChanged = new EventEmitter();

  upLike() {
    this.totalLikes++;
    this.likeChanged.emit(this.totalLikes);
  }
}

Here is our test code –

import { UserResponseComponent } from './user.response.component';

describe('UserResponsesComponent', () => {
  var component: UserResponseComponent;

  beforeEach(() => {
    component = new UserResponseComponent();
  });

  it('should raise likeChanged event when upLiked', () => {
    let totalLikes = null;
    component.likeChanged.subscribe(tl => totalLikes = tl);

    component.upLike();

    expect(totalLikes).toBe(1);
  });
});

Something to note here: events are Observables, and during the arrange phase of our test we subscribe to the event, so that once it is emitted we capture the data it carries and can use that fact during assertion.

Finally, let's test a service and we are done with this long post. :)

This is how our service is looking –

import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

export class TodoService {
  constructor(private http: Http) {
  }

  add(todo) {
    return'...', todo).map(r => r.json());
  }

  getTodos() {
    return this.http.get('...').map(r => r.json());
  }

  delete(id) {
    return this.http.delete('...').map(r => r.json());
  }
}


The component using this service:

import { TodoService } from './todo.service'

export class TodosComponent {
  todos: any[] = [];
  message: any;

  constructor(private service: TodoService) {}

  ngOnInit() {
    this.service.getTodos().subscribe(t => this.todos = t);
  }

  add() {
    var newTodo = { title: '... ' };
      t => this.todos.push(t),
      err => this.message = err);
  }

  delete(id) {
    if (confirm('Are you sure?'))
      this.service.delete(id).subscribe();
  }
}


And the test code –

import { TodosComponent } from './todos.component';
import { TodoService } from './todo.service';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/from';
import 'rxjs/add/observable/empty';
import 'rxjs/add/observable/throw';
//import * as _ from 'lodash';

describe('TodosComponent', () => {
  let component: TodosComponent;
  let service: TodoService;

  beforeEach(() => {
    service = new TodoService(null); //manoeuvring with null to avoid http object creation and setup
    component = new TodosComponent(service);
  });

  it('should set todos to the value returned by server via todo service', () => {
    //here we are spying on the getTodos method of TodoService; callFake takes the function
    //it's faking, giving us control over the function we are faking
    let todos = [1, 2, 3];
    spyOn(service, 'getTodos').and.callFake(() => {
      return Observable.from([todos]);
    });

    component.ngOnInit();

    expect(component.todos).toBe(todos); //more specific assertion
  });

  it('should call the server and save the new todo given to it', () => {
    let spy = spyOn(service, 'add').and.callFake(todo => {
      return Observable.empty();
    });

    component.add();

    expect(spy).toHaveBeenCalled();
  });

  it('should add the todo returned from service add method', () => {
    //returnValue allows us to return Observables that we created using convenience functions
    let todo = { id: 1 };
    let spy = spyOn(service, 'add').and.returnValue(Observable.from([todo]));

    component.add();

    expect(component.todos.indexOf(todo)).toBeGreaterThan(-1);
  });

  it('should set message to error message from server', () => {
    let error = "error from server";
    let spy = spyOn(service, 'add').and.returnValue(Observable.throw(error));

    component.add();

    expect(component.message).toBe(error);
  });

  it('should call delete method of service when user confirms the window confirm popup', () => {
    spyOn(window, 'confirm').and.returnValue(true);
    let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

    component.delete(1);

    expect(spy).toHaveBeenCalledWith(1);
  });

  it('should NOT call delete method of service when user cancels the window confirm popup', () => {
    spyOn(window, 'confirm').and.returnValue(false);
    let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

    component.delete(1);

    expect(spy).not.toHaveBeenCalled();
  });
});




Here we are using the spyOn function to spy on a method of an object (passed as the 2nd and 1st arguments respectively) and then tweak its behavior using either the callFake or returnValue functions.


Thanks for being so long with me and this post, great to see you writing clean, maintainable and well tested code!