Getting Ready For Cloud Citizenship…

In this post we will briefly examine what to consider when preparing our application(s) for cloud citizenship, i.e. making them native to the cloud.

What necessitates 12 Factor?

[Image: CN1]

The image above brings us to the Pets vs. Cattle story…

We treated app servers as our pets: we kept them close to us (on premises), cared for them, got them treated when they were unhealthy, added more power to them, and so on.

But servers are really cattle: today they are cheap, if you need more you go and buy them, if they are unhealthy you cull them, and if you have extras you return them to the market.

In the Java world, app servers are pets; they are not disposable. Microservices, on the other hand, are cattle: they are cheap, quick to start, quicker to bring down, easy to replace, and decoupled.

What is 12 Factor?

  • A methodology
  • Set of Principles
  • Best Practices based on experience and observations at Heroku

that leads to…

  • Scalability
  • Maintainability
  • Portability

The mechanism through which these are achieved:

  • Immutability – Infrastructure is immutable.
  • Ephemerality – Applications are ephemeral and disposable, not persistent.
  • Declarativity – Declarative setups, configurations.
  • Automation – automate as much as possible.

What are those 12 factors?

Build/Deploy Focused:

  • Codebase
  • Dependencies
  • Configuration
  • Backing Services
  • Build, Release, Run

Architecture/Design Focused:

  • Processes
  • Port Binding
  • Concurrency
  • Disposability
  • Dev/Prod Parity
  • Logs
  • Admin Processes

 

Let’s take a look at each in detail:

Build/Deploy Factors Detailed…

  • Codebase
    • Should use VCS
    • Most important: one repository per application
    • Shared code should be moved into an application of its own and treated as a library
  • Dependencies
    • Explicitly declared and managed
    • Don't expect your dependencies to be provided by the OS/container etc.
    • Don't check jar files into the code repo
  • Configuration
    • Should be separated from code
    • Covers items that are specific to an environment, not to the application
    • Should be made available through environment variables or a similar mechanism, like our AMC (a sketch follows this list)
    • Litmus test – can we open-source our code base without exposing any internal URLs or credentials?
  • Backing Services
    • any service that is communicated with over a network
    • database connections, cache providers, file sharing services like SFTP or Amazon S3, email services
    • are bound by a URL to the remote or local resource identically; remote services are treated the same as local ones, and the URL is provided by the configuration
    • Consider these as attachable resources
    • Allows swapping out the service in each environment or data center
  • Build, Release, Run
    • Should be executed in 3 discrete steps
    • Build compiles the code and produces an executable binary, e.g. a jar file
    • Release combines the configuration with the build output to create a release image per deployment need
    • The release image has everything that the application needs to run
    • Run starts the application from the release image
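To make the Configuration factor concrete, here is a minimal sketch (the variable names and values are hypothetical): the same build artifact runs everywhere, and only the injected environment differs.

# staging environment (hypothetical values)
export DATABASE_URL="postgres://db.staging.internal:5432/app"
export SMTP_URL="smtp://mail.staging.internal:25"
java -jar app.jar    # the identical jar runs in every environment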

Architectural/Design Factors Detailed…

  • Processes
    • Should be stateless; when one goes down, it shouldn't take anything important down with it
    • In-memory state should be short-lived, scoped to a single operation
    • Anything that needs to be stored from operation to operation needs to leverage a database or a cache
    • Sticky sessions are not good
    • Cache managers like EHCACHE, which keep state in memory but still distribute it, are OK
  • Port Binding
    • Should be fully self-contained, not relying on external infrastructure for anything
    • Should expose itself over a port, instead of relying on an application server to do this for it
    • Each process should have its communication protocols bound to a (usually non-standard) port, allowing it to run in a container in an isolated fashion
  • Concurrency
    • The JVM has some great concurrency libraries (java.util.concurrent, RxJava, java.util.stream etc.), but they are for scaling up
    • To scale out, diversify the workload: break tasks into applications that each do a single job, e.g. web request handler, backend job, scheduled job etc.
    • Microservices help here
  • Disposability
    • Quick to start up, well within 60 seconds; refactor the application to get there
    • Graceful shutdown: within 10 seconds of receiving a TERM signal it should release resources, clean itself up and go down gracefully (a sketch follows this list)
    • Resilient to failure: if it shuts down gracefully and comes up quickly, it can be called resilient to failure
    • App servers are pets; microservices are cattle, and cattle are disposable
  • Dev/Prod Parity
    • Dev environment should be identical to PROD environment and every environment in between (staging, QA, UAT etc.)
    • Parity leads to reproducibility, and reproducibility paves the way towards disposability
  • Logs
    • Log messages are critical for operations in helping troubleshoot issues
    • Treat logs as an event data stream
    • Application writes its logs to standard out in the form of a stream
    • Each application shares the same stream
    • The logs can then be aggregated to another system like ELK for archival and reporting
    • Standardizing the logging output (as JSON messages) across all applications makes this aggregation easier
  • Admin Processes
    • Admin tasks should run as isolated processes
    • Tasks shouldn't be built into the application
    • They should be migrated out and managed as applications themselves
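To make Disposability concrete, here is a minimal sketch (Node/TypeScript, my own illustration rather than anything prescribed by the methodology): the process starts fast and reacts to a TERM signal by draining in-flight work before exiting.

import * as http from 'http';

// Quick startup: no heavy warm-up work before we can serve traffic.
const server = http.createServer((_req, res) => res.end('ok'));
server.listen(8080);

// Graceful shutdown: stop accepting new connections, let in-flight
// requests finish, then exit well within the TERM-signal budget.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});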

Docker, make scripting distributed systems easy – II

In this post we will be exploring two Docker features:

  • volumes, and
  • networks

Containers are usually immutable and ephemeral, meaning we should only re-deploy containers and avoid changing them in place. So what happens to the data that containers work with? By default its lifetime is tied to the container's lifetime: removing the container sweeps the data out.
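A quick illustration of that default behaviour (the container name here is my own):

docker run --name data-demo busybox sh -c 'echo hello > /data.txt'
docker rm data-demo    # removing the container removes /data.txt with it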

Docker provides the concept of volumes to persist data beyond the container lifetime. There are two ways to achieve it: volumes (named or otherwise) and bind mounts. We can even have volumes that we may call ephemeral.

The volumes approach creates a special location outside of the container's UFS (union file system).

docker run -d --name volume-nginx -p 8060:80 -v nginx-data:/usr/share/nginx/html nginx

then browse to http://<host-name-or-IP>:8060
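To see where Docker keeps this named volume on the host, we can list and inspect it (the mountpoint shown is the typical default location, so treat it as indicative):

docker volume ls                    # nginx-data shows up in the list
docker volume inspect nginx-data    # "Mountpoint": "/var/lib/docker/volumes/nginx-data/_data"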

 

[Image: Capture6]

Edit the file from inside the container. I am behind a proxy, hence I had to do a few extra steps (these may help in case you happen to be behind a proxy too):

docker exec -ti volume-nginx bash

cd /etc/apt/apt.conf.d/

echo 'Acquire::http::proxy "http://172.17.0.1:3128";' >> 99proxy

apt-get update

apt-get install -y vim

vim /usr/share/nginx/html/index.html    # make the desired changes

Here is my changed index.html output:

[Image: Capture7]

 

Now let's remove this container and run another one, pointing it at the volume where the removed container wrote its edited index.html.

 docker rm -f volume-nginx

 docker run -d --name volume-nginx1 -p 8070:80 -v nginx-data:/usr/share/nginx/html nginx

Here is what I see when I hit port 8070:

[Image: Capture10]

 

My new container is running on 8070 and is serving the data that the other container, exposed on 8060, wrote.

Bind mounts link a host path to a container path, basically making the two locations point to the same files. Their usage was explained with an example in Part I.

Bind mounts are very useful during development, since changes on the host are reflected in the container.

We do have ephemeral volumes as well: --volumes-from allows a volume to be shared among running containers, and this volume lasts until the last container using it ceases to exist.
For example:

docker run -ti --rm --name volume-creator -v /shared-space ubuntu:14.04 bash
# inside the container, create a file in the shared volume:
cd /shared-space
echo "data1" > myfile1

With this we have a container that has created a volume, /shared-space, which other containers are going to use via the --volumes-from option.

Create another container as

docker run -ti --rm --name volume-cons1 --volumes-from volume-creator ubuntu:14.04 bash
cd /shared-space
ls
# add a few more files
echo "data22" > myfile2

We see the file from the creator container.

Now kill the creator container, and you still see /shared-space in the volume-cons1 container.

Let’s create another container

docker run -ti --rm --name volume-cons11 --volumes-from volume-cons1 ubuntu:14.04 bash
cd /shared-space 
ls

and we see both files.

Kill all these containers and the shared volume is gone. See the images below for the execution of the commands above:

[Image: Capture11]

This image shows the volume count changing: it goes up as the volume gets created, and drops by 1 when all consumers of this ephemeral volume cease to exist.

[Image: Capture12]

 

What happens when a container is created, from a networking point of view? Where does it get its IP from, which network does it attach itself to, how does inter-container communication happen, and do we have an opportunity to get in and configure things to our need? Let's try to look inside with the help of an example.

Create 2 containers like:

docker run --name web-default-net-1 -d httpd
870518b1d492e6193d50f8d5cf178c3fcfe28cb6a36da11525a43ea794cbd306
docker run --name web-default-net-2 -d httpd
ac87a0ab4ec5cf4fbd631fc12294802cdf9fc2be5fa98e5f1330645624877d21
docker exec -ti web-default-net-2 bash
root@ac87a0ab4ec5:/usr/local/apache2# ping web-default-net-1
ping: unknown host
root@ac87a0ab4ec5:/usr/local/apache2#

We see that these two containers don't recognize each other by name, even though they are on the same virtual network, identified as bridge.

docker inspect --format="IP: {{.NetworkSettings.Networks.bridge.IPAddress}} Gateway: {{.NetworkSettings.Networks.bridge.Gateway}}" web-default-net-2 web-default-net-1
IP: 172.17.0.6 Gateway: 172.17.0.1
IP: 172.17.0.4 Gateway: 172.17.0.1

ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
 inet6 fe80::42:6eff:fef4:2f67 prefixlen 64 scopeid 0x20<link>
 ether 02:42:6e:f4:2f:67 txqueuelen 0 (Ethernet)
 RX packets 4390968 bytes 1161893124 (1.0 GiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 6258711 bytes 1128943454 (1.0 GiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

We will come back to how we solve this situation, but before that let's create a custom virtual network:

docker network create custom-net
15065a5451dc9f604c1b57bcc8f33a4446835d9876376ddae6710c7dbff6f25e
docker inspect custom-net
[
 {
 "Name": "custom-net",
 "Id": "15065a5451dc9f604c1b57bcc8f33a4446835d9876376ddae6710c7dbff6f25e",
 "Created": "2018-01-02T09:19:05.112114846+05:30",
 "Scope": "local",
 "Driver": "bridge",
 "EnableIPv6": false,
 "IPAM": {
 "Driver": "default",
 "Options": {},
 "Config": [
 {
 "Subnet": "172.20.0.0/16",
 "Gateway": "172.20.0.1"
 }
 ]
 },
 "Internal": false,
 "Attachable": false,
 "Ingress": false,
 "Containers": {},
 "Options": {},
 "Labels": {}
 }
]
ifconfig
br-15065a5451dc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 172.20.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
 inet6 fe80::42:cfff:fe95:850e prefixlen 64 scopeid 0x20<link>
 ether 02:42:cf:95:85:0e txqueuelen 0 (Ethernet)
 RX packets 9 bytes 645 (645.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 17 bytes 1376 (1.3 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

And create containers connected to this custom virtual network:

docker run --name web-custom-net-1 -d --net=custom-net httpd
docker run --name web-custom-net-2 -d --net=custom-net httpd
docker inspect web-custom-net-2 
 "Networks": {
 "custom-net": {
 "IPAMConfig": null,
 "Links": null,
 "Aliases": [
 "d676e77b392b"
 ],
 "NetworkID": "15065a5451dc9f604c1b57bcc8f33a4446835d9876376ddae6710c7dbff6f25e",
 "EndpointID": "50fb78946c0c4d3cb4016f6926eed7e6fe9c9136111afd777419db91762f975d",
 "Gateway": "172.20.0.1",
 "IPAddress": "172.20.0.3",
 "IPPrefixLen": 16,
 "IPv6Gateway": "",
 "GlobalIPv6Address": "",
 "GlobalIPv6PrefixLen": 0,
 "MacAddress": "02:42:ac:14:00:03"
 }
 }

If we bash into either of these containers and try to ping the other:

docker exec -ti web-custom-net-2 bash
root@d676e77b392b:/usr/local/apache2# ping web-custom-net-1
PING web-custom-net-1 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: icmp_seq=0 ttl=64 time=0.143 ms
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from 172.20.0.2: icmp_seq=2 ttl=64 time=0.110 ms
64 bytes from 172.20.0.2: icmp_seq=3 ttl=64 time=0.107 ms
64 bytes from 172.20.0.2: icmp_seq=4 ttl=64 time=0.111 ms
64 bytes from 172.20.0.2: icmp_seq=5 ttl=64 time=0.131 ms
64 bytes from 172.20.0.2: icmp_seq=6 ttl=64 time=0.140 ms
64 bytes from 172.20.0.2: icmp_seq=7 ttl=64 time=0.163 ms
^C--- web-custom-net-1 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.107/0.132/0.163/0.000 ms
root@d676e77b392b:/usr/local/apache2#

They know one another. How does it happen?

It happens because of a built-in DNS server that uses the container name as the equivalent of a host name; hence the name of a container has its own importance. The built-in DNS server doesn't come by default with the bridge virtual network, which is why we couldn't make the first two containers talk to each other.
--link is the workaround to enable DNS between containers on the default bridge virtual network.
Docker Compose by default creates a virtual network for the application we spin up with it and takes care of DNS resolution between the containers it creates, without the need for --link.

Now let's try the --link workaround to resolve the situation in the first use case.

docker run --name web-default-net-3 -d --link web-default-net-1 httpd
5e2762afc8cae3441307478a5376a359e65dbddc084bfb397093df0adeedfc69
docker exec -ti web-default-net-3 bash
root@5e2762afc8ca:/usr/local/apache2# ping web-default-net-1
PING web-default-net-1 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: icmp_seq=0 ttl=64 time=0.419 ms
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from 172.17.0.4: icmp_seq=2 ttl=64 time=0.160 ms
64 bytes from 172.17.0.4: icmp_seq=3 ttl=64 time=0.123 ms
64 bytes from 172.17.0.4: icmp_seq=4 ttl=64 time=0.166 ms
64 bytes from 172.17.0.4: icmp_seq=5 ttl=64 time=0.120 ms
64 bytes from 172.17.0.4: icmp_seq=6 ttl=64 time=0.131 ms
64 bytes from 172.17.0.4: icmp_seq=7 ttl=64 time=0.109 ms
^C--- web-default-net-1 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.109/0.169/0.419/0.096 ms
root@5e2762afc8ca:/usr/local/apache2#

Containers can also be attached to and detached from networks after creation:

docker network connect custom-net web-default-net-2

docker network disconnect custom-net web-default-net-2
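We can verify the attach/detach by inspecting the network again; its Containers section should reflect the change:

docker network inspect custom-net    # after connect, "Containers" lists web-default-net-2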

 

Next we will be talking about Dockerfile and docker-compose…


Docker, make scripting distributed systems easy – I

In this series of posts, I will share my experience working with Docker as a developer and will expand on a few topics a bit more, like:

  • image metadata and how we can get to it
  • ephemeral volumes and bind mounts
  • the network concepts used inside
  • Dockerfile, docker-compose… and more

Starting with: what is Docker? Docker is two programs, a client and a server. The server receives commands from the client over a socket, either across a network or through a file called a socket file. On a host where Docker is installed, we can find the socket file at /var/run/docker.sock.
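We can actually see both halves with docker version, which prints a Client section and a Server section:

docker version    # shows Client and Server details separately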

With this information, let's try to run the Docker client inside a Docker container, where the client sends commands to the Docker server through the docker.sock file.

docker run -ti -v /var/run/docker.sock:/var/run/docker.sock docker sh

What does this command do? It's going to:

  • Look for an image named 'docker' in the host's local Docker repository; if found, it will use the local copy, else it will pull the image from the Docker Hub repository.
  • Then it will map the /var/run/docker.sock file on the host into the container at /var/run/docker.sock, so that updates are reflected on both sides.
  • Finally it will give us an interactive terminal inside the container and run the 'sh' command in it.

After executing this command, we are inside a container that has the Docker client in it, and we can run docker from within:

docker run -ti ubuntu bash

This command is simpler than the earlier one; it runs the latest ubuntu image, giving an interactive terminal with bash running inside.

By now we are running the Docker client from within a container itself.

[Image: Capture1]

Let's talk a bit about images. Images are created out of a Dockerfile, which lists the steps to create an image. Each step in a Dockerfile adds a layer on top of the previous step's image, by running the previous image as an intermediate container and executing the current step on top. Docker may decide to remove intermediate images as it sees fit. But where do images get stored on the host machine? On a CentOS 7 host it's at:

ls -l /var/lib/docker/image/devicemapper/imagedb/content/sha256/
total 3620
-rw-------. 1 root root 13712 Dec 29 13:05 0008e3a6103746ec4f302fffc13fb796e461b71add7209366f8ab9ad46622f77
-rw-------. 1 root root 8825 Dec 29 13:05 0046e7a2b0932bd0e99467b32401d80d8d3ea5f7a33b2acd44f47372d2e3872f
-rw-------. 1 root root 3615 Jan 1 10:22 00fd29ccc6f167fa991580690a00e844664cb2381c74cd14d539e36ca014f043
-rw-------. 1 root root 9357 Dec 29 13:05 021af8ef946e34a20dc2cdc06a82edfbd426249ee2c9d2f6dcd707c23a132aaa
-rw-------. 1 root root 7574 Dec 29 13:05 0232177273551cd33a469aeace543931598e028a1bf4b4591cc5d3dfeba5af64
-rw-------. 1 root root 811 Dec 29 13:05 02424f5e7e451ea699a4d8058d733f51d78658cd0fd86b07645cf158bfccc0ad
-rw-------. 1 root root 8112 Dec 29 13:05 02be064043ed0cf60bc3d572ced06159cbc4805766df624f9b4d2405a844d89a
-rw-------. 1 root root 3241 Dec 29 13:05 037fbf47952e2cfc291a23b19b0e665df1fa924b06f47e4d6eb2f1a1d459909b
-rw-------. 1 root root 1577 Dec 29 13:05 0388af444d5ac9b30c56e14f669ef917da437d316026f494d31bca315daa95e4
-rw-------. 1 root root 2804 Dec 29 13:05 039f1bb3922f20162d1f2e43dc308a21fb975eed0990f31fedd0cc19b4e335ab
-rw-------. 1 root root 7363 Dec 29 13:05 03d3db4469c289f4fd7fd626bcd01dc6fbd12d1ea0f8c1f2ade84f89523c3685
-rw-------. 1 root root 4149 Dec 29 13:05 04cf91413004c1d92387ee8d652e9c29c4448c0c26c9c9acf74f356a4261f2a9
-rw-------. 1 root root 4887 Dec 29 13:05 04ded2d551766603331838fdb689988e2b257a7ff7ea41ab4652e43afa977379
-rw-------. 1 root root 1194 Dec 29 13:05 05138b69f83fb7ebeac66ee84e7c7ca937edb2e3ae24ec55b3d5b167af2ef6ce
-rw-------. 1 root root 1286 Dec 29 13:05 058fafbdf5523cf24cc19b2dc46e611dff716af281a4d54745a7ec74d7b6a0a1
-rw-------. 1 root root 3978 Dec 29 13:05 05f608c6041e4f45a90734cd0c7d0bd081944f30470b6ed4fdc417f523db23f7
-rw-------. 1 root root 8592 Dec 29 13:05 0615533b88143b1b8f449a4d01ca339ebf02242d3a41d74f9140fabf176f5ce2
-rw-------. 1 root root 5807 Dec 29 13:05 0717bf27b9de19ad493026775f04e113fbc23bc1f966f6a1637c01560c5ecddf
-rw-------. 1 root root 9502 Dec 29 13:05 084085ef3ff7c1711fb984793696926842219401aa6a018b62b3a89d51a45dea
-rw-------. 1 root root 1149 Dec 29 13:05 084d63991302ebe404105920913a7ed851cf012e5b0f3e9c2b6a9fb6cf10214c
-rw-------. 1 root root 5962 Dec 29 13:05 0a928172a05ca4f8185b095e6a28877f7f68dbc55886323fef3b8353b65d3c97
-rw-------. 1 root root 1863 Dec 29 13:05 0aec253eb94e71d72336480e3408177ce67968d4ea1dcfabfe4f0d9e5f85ad70

At /var/lib/docker/image/devicemapper/ we see a repositories.json that stores image-related data as JSON; an extract is shown below:

[Image: Capture2]

The folder /var/lib/docker/ stores all information about containers, images, networks and volumes.

  • To list all images we use – docker images
  • To remove an image we use – docker rmi <image-name/id>
  • To force remove an image that has a container – docker rmi -f <image-name/id>

When a container is running, the container's consumer would need to do a few things:

  • See all running containers with docker ps
  • See all containers, running or stopped, with docker ps -a
  • Look at the log of a container with docker logs <container-name/id>, or follow a running container's log with docker logs -f <container-name/id>
  • Look into the metadata of a container with docker inspect <container-name/id>. We can even use inspect to look into image metadata too. The inspect output provides many more details, including the IP address of the container, its volume mounts, binds, network configuration etc.
  • Move containers across virtual networks with docker network connect <network-name> <container-name> to join a virtual network and docker network disconnect <network-name> <container-name> to leave one. Networks come in very handy when we need seamless communication across containers; logically putting all our containers on the same virtual network eases a lot of communication needs.
  • Stop a running container with docker stop <container-name/id>
  • Remove a container with docker rm <container-name/id>; if the container is running we can force removal with docker rm -f <container-name/id>
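Putting a few of these together, a typical throwaway session might look like this (the container name is my own choice):

docker run -d --name web httpd    # start a container in the background
docker ps                         # confirm it is up
docker logs -f web                # follow its log stream (Ctrl+C to detach)
docker stop web                   # stop it gracefully
docker rm web                     # and remove it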

If we want to see how a container is doing:

docker stats <container-name/id>
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
postgres2 0.00% 40.11MiB / 7.64GiB 0.51% 10.1kB / 6.81kB 29MB / 718kB 0

 

To see which processes are running inside a container:

docker top <container-name/id>
UID PID PPID C STIME TTY TIME CMD
polkitd 24389 24368 0 2017 ? 00:00:02 postgres
polkitd 24437 24389 0 2017 ? 00:00:00 postgres: checkpointer process
polkitd 24438 24389 0 2017 ? 00:00:03 postgres: writer process
polkitd 24439 24389 0 2017 ? 00:00:03 postgres: wal writer process
polkitd 24440 24389 0 2017 ? 00:00:02 postgres: autovacuum launcher process
polkitd 24441 24389 0 2017 ? 00:00:05 postgres: stats collector process

If we want to run a command (or get a shell) inside a running container:

docker exec -ti <container-name/id> <command to run>
E.g.
[vikash.pandey]# docker exec -ti postgres2 echo hellp postgres
hellp postgres

 

Let's take a very simple example, where we use the nginx image to run a container. We will run the container in such a way that we can edit index.html on the host machine and see the changes in real time in the browser.

Make a directory and create an index.html inside it. From within that directory run:

docker run -d --name livenginx -p 8050:80 -v $(pwd):/usr/share/nginx/html nginx

 

Once this is done hit http://<your docker host IP/NAME>:8050

[Image: Capture3]

Keep editing index.html on host and see the changes by refreshing the page.

[Image: Capture4]

Another change:

[Image: Capture5]

Let's decode docker run -d --name livenginx -p 8050:80 -v $(pwd):/usr/share/nginx/html nginx.

-d = run this container detached, i.e. like a daemon/service.

--name = give a convenient name to this container

-p <host port>:<container port> = map a host port to a container port, i.e. all incoming requests on port 8050 on the host are redirected to port 80 of the container. If we omit the host-port half of the pair, Docker finds and assigns a free port on the host for the mapped container port. If you wish to specify the protocol (tcp is the default), write it like <host port>:<container port>/udp
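For instance, letting Docker pick the host port and then asking which one it chose (a quick sketch; the chosen port will vary):

docker run -d --name auto-port -p 80 nginx
docker port auto-port    # e.g. 80/tcp -> 0.0.0.0:32768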

-v <host path>:<container path> = map a host path into a container path, so that each side's edits are reflected on the other. This is called a bind mount and is very useful during development.

Volumes help persist container data; otherwise containers are ephemeral. More on volumes in an upcoming post.

Before I conclude this one: when running a container, we may like to control the resources allocated to it.

docker run -d -ti --cpu-shares 20 busybox sh

docker inspect --format="Memory: {{ .HostConfig.Memory}} CPUShares: {{ .HostConfig.CpuShares}}" kind_mcclintock
Memory: 0 CPUShares: 20

docker run -d -ti --cpu-shares 20 --memory 1GB busybox sh

 docker inspect --format="Memory: {{ .HostConfig.Memory}} CPUShares: {{ .HostConfig.CpuShares}}" confident_davinci
Memory: 1.073741824e+09 CPUShares: 20

If we don't name our containers, Docker generates convenient names for them by default. A container's name has its own importance, and we will come back to it when we discuss inter-container communication and networking.

Apart from the many other kinds of value containers bring us, they are very convenient when we want to learn a tool like Drupal or WordPress: they take away the multitude of steps we would go through the traditional software-installation way. They remove the complex installation and deployment steps that often become a barrier to getting started on something. They also let us do things much more efficiently and test on multiple platforms, like running a script on various Linux variants without installing those variants: just run a container with the desired Linux variant and tear it down as you wish.

Stay tuned for volumes and networks in part II…

 

 

SCRUM helps reduce Procrastination

While exploring what procrastination is, I found it defined as “the practice of carrying out less urgent tasks in preference to more urgent ones, or doing more pleasurable things in place of less pleasurable ones, and thus putting off impending tasks to a later time”. And why do we procrastinate?

I came across various reasons and causes for procrastination; here are a few:

  • Lack of confidence
  • Easy Distraction
  • Feeling overwhelmed
  • Blocked creativity
  • Disliking the task

And then I looked to SCRUM, and agility in general, for what they offer to resolve procrastination.

Let's look at lack of confidence. Procrastinators set really high standards for themselves, and that causes a lack of confidence. Is my work going to meet the standard? Won't it expose that I am not good at this skill or activity? Such thoughts tend to make us hold on to a task longer or keep pushing it to some other day. SCRUM's principle of producing results at short intervals and continuously collecting feedback goes hand in hand with the mantra for procrastinators to handle this cause: “production before perfection”. Act on it, produce results, get them reviewed, and keep moving towards perfection, a contextual perfection accepted by many (all stakeholders) rather than just you.

Second, feeling overwhelmed: a big chunk of work, too many things to consider before acting, meaning too many reasons and risks to keep it on hold. SCRUM's principle of breaking down the overwhelming-looking work into epics and stories, which can be thought through relatively quickly in a smaller context, encourages a clearer picture, reduces the sense of overwhelm and enables one to start on the task at hand rather than keep procrastinating. Setting interim deadlines is another way to resolve feeling overwhelmed: producing and reviewing results at each interim deadline confirms whether we are doing it right or need a correction. Timeboxed sprints, demos to stakeholders and retrospectives are the events that help us break the overwhelmed syndrome.

Third, blocked creativity, often true when we keep trying within ourselves. You want to get that work done. You're sick of having it hang over you, but you're out of good ideas. If you're working alone on a task and looking for a creative idea, maybe run your ideas by the SCRUM team and see if that sparks the creativity you're searching for. SCRUM encourages communication among the SCRUM team and all stakeholders. Sharing your ideas with the team could be just that one thing.

Disliking the task: maybe starting small on that very task, with a sense of belonging and of contributing to the bigger cause the SCRUM team is set on, partnering in success with the team and learning on the go, could change our perception of that task or activity. We may find an exciting and efficient way to work on it. We may also realize the value of that task.

Agility, adaptability, commitment and openness could help reduce procrastination.

Integration Testing Angular Applications – Part I

Continuing from my previous post on testing Angular applications, Unit Testing Angular Applications, this post explores the integration-testing approach for the following features:

  • Component having property and event binding,
  • Directive,
  • Pipe

Component having property and event binding

Let's look at what we have in this component and its usage, and then we will see our integration test code.


//TS file

import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
selector: 'app-voter',
templateUrl: './voter.component.html',
styleUrls: ['./voter.component.css']
})
export class VoterComponent {
@Input() othersVote = 0;
@Input() myVote = 0;

@Output() vote = new EventEmitter();

upVote() {
if (this.myVote == 1)
return;

this.myVote++;

this.vote.emit({ myVote: this.myVote });
}

downVote() {
if (this.myVote == -1)
return;

this.myVote--;

this.vote.emit({ myVote: this.myVote });
}

get totalVotes() {
return this.othersVote + this.myVote;
}
}

 


<!-- template file -->
<div class="voter">
  <i
    class="glyphicon glyphicon-menu-up vote-button"
    [class.highlighted]="myVote == 1"
    (click)="upVote()"></i>

  <span class="vote-count">{{ totalVotes }}</span>

  <i
    class="glyphicon glyphicon-menu-down vote-button"
    [class.highlighted]="myVote == -1"
    (click)="downVote()"></i>
</div>

We are going to test the following cases:

  • should render total votes counter
  • should highlight upvote button when upVoted
  • should increase totalVotes when upvote button is clicked
  • should decrease totalVotes when downvote button is clicked

This is what we have in our test:


import { By } from '@angular/platform-browser';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { VoterComponent } from './voter.component';

describe('VoterComponent', () => {
  let component: VoterComponent;
  let fixture: ComponentFixture<VoterComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [ VoterComponent ]
    });
    fixture = TestBed.createComponent(VoterComponent);
    component = fixture.componentInstance;
  });

  it('should render total votes counter', () => {
    component.othersVote = 20;
    component.myVote = 1;

    fixture.detectChanges();

    let de = fixture.debugElement.query(By.css('.vote-count'));
    let el: HTMLElement = de.nativeElement;
    expect(el.innerText).toContain('21');
  });

  it('should highlight upvote button when upVoted', () => {
    component.myVote = 1;

    fixture.detectChanges();

    let de = fixture.debugElement.query(By.css('.glyphicon-menu-up'));

    expect(de.classes['highlighted']).toBeTruthy();
  });

  it('should increase totalVotes when upvote button is clicked', () => {
    let button = fixture.debugElement.query(By.css('.glyphicon-menu-up'));

    button.triggerEventHandler('click', null);

    expect(component.totalVotes).toBe(1);
  });

  it('should decrease totalVotes when downvote button is clicked', () => {
    let button = fixture.debugElement.query(By.css('.glyphicon-menu-down'));

    button.triggerEventHandler('click', null);

    expect(component.totalVotes).toBe(-1);
  });
});

A few major differences from our unit-testing approach: here we are not new-ing the component; we use TestBed to configure a testing module, simulating our application's regular module; and we use the fixture and the component to simulate HTML events like button clicks, working directly with the template's HTML elements to act and assert.

Directive

Our directive is called HighlightDirective, with a defaultColor and another color that can be set by the consuming component via property binding. Let's look at its code:


import { Directive, Input, ElementRef, OnChanges } from '@angular/core';

@Directive({
  selector: '[highlight]'
})
export class HighlightDirective implements OnChanges {
  defaultColor = 'pink';
  @Input('highlight') bgColor: string;

  constructor(private el: ElementRef) {
  }

  ngOnChanges() {
    this.el.nativeElement.style.backgroundColor = this.bgColor || this.defaultColor;
  }
}

We will be creating a component that uses this directive, with the following template:


@Component({
  template: `
    <p highlight="lightblue">First</p>
    <p highlight>Second</p>
  `
})
class DirectiveHostComponent {
}

So we are going to test the following cases:

  • should highlight 1st para with directives bgColor color
  • should highlight 2nd para with default background color
  • should set directives bgColor color with lightblue

Here is what we write in the test spec file:


import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { HighlightDirective } from './highlight.directive';
import { By } from '@angular/platform-browser';
import { Component } from '@angular/core';

// Important to create a component here so that we can apply the directive to its
// template elements and test the effect.
@Component({
  template: `
    <p highlight="lightblue">First</p>
    <p highlight>Second</p>
  `
})
class DirectiveHostComponent {
}

describe('HighlightDirective', () => {
  let fixture: ComponentFixture<DirectiveHostComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [ DirectiveHostComponent, HighlightDirective ]
    });
    fixture = TestBed.createComponent(DirectiveHostComponent);
    fixture.detectChanges();
  });

  it('should highlight 1st para with directives bgColor color', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[0]; // get 1st para element

    let directive = de.injector.get(HighlightDirective);
    expect(de.nativeElement.style.backgroundColor).toBe(directive.bgColor);
  });

  it('should highlight 2nd para with default background color', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[1]; // get 2nd para element

    let directive = de.injector.get(HighlightDirective);
    expect(de.nativeElement.style.backgroundColor).toBe(directive.defaultColor);
  });

  it('should set directives bgColor color with lightblue', () => {
    let de = fixture.debugElement.queryAll(By.css('p'))[0]; // get 1st para element

    let directive = de.injector.get(HighlightDirective);
    expect(directive.bgColor).toBe('lightblue');
  });
});

To reduce dependencies and keep the test clean, we created the component that uses the directive in the test spec file itself.

Pipe

Our pipe transforms the text it is applied to into Title Case; let's see its code:


import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'titlecase'
})
export class TitlecasePipe implements PipeTransform {

  transform(input: any, args?: any): any {
    if (typeof input !== 'string') {
      throw new Error('Requires a String as input');
    }
    return input.length === 0 ? '' :
      input.replace(/\w\S*/g, (txt => txt[0].toUpperCase() + txt.substr(1).toLowerCase()));
  }

}

The usage of the pipe:

<span>{{ title | titlecase }}</span>

The component using it should have test code as shown below:


describe('UserDetailsComponent', () => {
  let component: UserDetailsComponent;
  let fixture: ComponentFixture<UserDetailsComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      imports: [],
      declarations: [UserDetailsComponent, TitlecasePipe],
      providers: []
    });
    fixture = TestBed.createComponent(UserDetailsComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should convert title name to Title Case', () => {
    const inputName = 'quick BROWN fox';
    const titleCaseName = 'Quick Brown Fox';
    let titleDisplay = fixture.debugElement.query(By.css('span')).nativeElement;
    let titleInput = fixture.debugElement.query(By.css('input')).nativeElement;

    // simulate the user entering a new name into the input box
    titleInput.value = inputName;

    // dispatch a DOM event so that Angular learns of the input value change
    let evnt = document.createEvent('CustomEvent');
    evnt.initCustomEvent('input', false, false, null);
    titleInput.dispatchEvent(evnt);

    // tell Angular to update the output span through the title pipe
    fixture.detectChanges();

    expect(titleDisplay.textContent).toBe(titleCaseName);
  });
});

And we can unit test the pipe itself by writing the code below:


import { TitlecasePipe } from './titlecase.pipe';

describe('TitlecasePipe', () => {
  const pipe = new TitlecasePipe();

  it('create an instance', () => {
    expect(pipe).toBeTruthy();
  });

  it('should work with empty string', () => {
    expect(pipe.transform('')).toEqual('');
  });

  it('should titlecase given string input', () => {
    expect(pipe.transform('wow')).toEqual('Wow');
  });

  it('should throw error with invalid values', () => {
    // must use an arrow function for expect to capture the exception
    expect(() => pipe.transform(undefined)).toThrow();
    expect(() => pipe.transform(9)).toThrowError('Requires a String as input');
  });
});

A point worth noting: when we create a component, directive etc. with the ng generate utility, we see 2 copies of beforeEach, as shown below:


beforeEach(async(() => {
  TestBed.configureTestingModule({
    declarations: [<<YourComponent>>]
  })
  .compileComponents();
}));

beforeEach(() => {
  fixture = TestBed.createComponent(<<YourComponent>>);
  component = fixture.componentInstance;
  fixture.detectChanges();
});

Note the async version: we may safely remove that copy, because with @angular/cli, webpack is our default builder and packaging tool, and webpack compiles and inlines templates, so we are not required to reach out to the file system asynchronously and compile them separately. For this reason, the async version of beforeEach is missing from all the test spec code provided in this post.

 

Great to see us writing clean, maintainable and well-tested code! In part II of this post we will explore the integration-testing approach for services and routes.

Unit Testing Angular Applications

Whenever we write an application, testing is one of the most fundamental activities we as developers are expected to do. It has a lot of benefits: it pushes us to write quality, maintainable code, and while writing test cases we identify coupling among application components and get an opportunity to re-look at our design, clearing unneeded coupling and reducing unnecessary dependencies.

In this post we are going to start exploring how we would build our unit tests for the following scenarios:

  • very basic function
  • testing strings and arrays
  • testing a simple class
  • testing a class having angular form in it
  • testing a service
  • testing a component that emits event

In my upcoming post on this topic I will expand this to cover integration tests, where we will explore how to write test cases for most of the scenarios above in integration with the Angular framework, testing routers, services and components while simulating user interactions.

Unit Testing a basic function

Let's suppose we have a function named compute that takes a number and increments it if the passed-in value is >= zero.

 

export function compute(number) {
  if (number < 0)
    return 0;
  return number + 1;
}

In that very same folder create a file with the suffix '.spec.ts': assuming the function is written in compute.ts, create compute.spec.ts and add the code shown below:

import { compute } from './compute';

describe('compute', () => {
  it('should return 0 when called with negative numbers', () => {
    let result = compute(-1);
    expect(result).toBe(0);
  });

  it('should increment by 1 when called with non negative numbers', () => {
    const parameter = 1;
    let result = compute(parameter);
    expect(result).toBe(parameter + 1);
  });
});

In our learning we are using @angular/cli as the tool, which in turn uses karma and jasmine. This is all taken care of and created for you when you create a new Angular application using @angular/cli with the command:

ng new <<your-app-name>>

then change directory into the newly created application and run

ng test

This ensures the karma test engine is running and responding to changes you make to your test file(s).

You would see something like this in the console where you ran 'ng test':

[Image: Capture1]

ng test also launches a web interface at http://localhost:9876/. Go to this URL, click on the DEBUG button and open the browser console (F12); here you see how your tests are performing.

[Image: Capture2]

 

A bit about karma/jasmine: describe is the function with which we write our test suite, and inside it, using the it function, we write our test cases. You configure karma in the karma.conf.js file available in your application; a quick look confirms why we see karma's web interface on port 9876.

An excerpt from karma.conf.js:


angularCli: {
  config: './angular-cli.json',
  environment: 'dev'
},
reporters: config.angularCli && config.angularCli.codeCoverage
  ? ['progress', 'karma-remap-istanbul']
  : ['progress'],
port: 9876,
colors: true,
logLevel: config.LOG_INFO,
autoWatch: true,
browsers: ['Chrome'],
singleRun: false

 

Unit Testing strings and arrays

This is the code we would like to write tests for:


//greet.ts

export function greet(name) {
  return 'Welcome ' + name;
}


//getCurrencies.ts

export function getCurrencies() {
  return ['USD', 'AUD', 'EUR', 'INR'];
}

Let's look at the tests:


//greet.spec.ts

import { greet } from './greet';

describe('greet', () => {
  it('should contain passed param in the message', () => {
    const parameter = 'Vikash';
    expect(greet(parameter)).toContain(parameter);
  });
});


//getCurrencies.spec.ts

import { getCurrencies } from './getCurrencies';

describe('getCurrencies', () => {
  it('should return supported currencies', () => {
    const currencies = getCurrencies();
    expect(currencies).toContain('AUD');
    expect(currencies).toContain('INR');
    expect(currencies).toContain('USD');
  });
});

It's almost identical to the earlier test we wrote for the compute function. Please note that we could have got our greet test to pass even with toBe('Welcome ' + parameter), but that makes the test fragile: it would break if we changed the static text in the greet function, say from 'Welcome' to 'Hello' or 'Hola'. toContain protects us from that fragility and is sufficient to cover what we need: that the data we pass as the parameter is part of the message returned from the greet function.

 

Unit Testing a simple class

Here comes our very, very simple class:


export class UserResponseComponent {
  totalLikes = 0;

  like() {
    this.totalLikes++;
  }

  disLike() {
    this.totalLikes--;
  }
}

And here is our test file:


import { UserResponseComponent } from './user.response.component';

describe('UserResponseComponent', () => {
  let userRespComp = null;

  beforeEach(() => {
    // Arrange
    userRespComp = new UserResponseComponent();
  });

  it('should increment the totalLikes counter by 1 when liked', () => {
    // Act
    userRespComp.like();
    // Assert
    expect(userRespComp.totalLikes).toBe(1);
  });

  it('should decrement the totalLikes counter by 1 when disliked', () => {
    // Act
    userRespComp.disLike();
    // Assert
    expect(userRespComp.totalLikes).toBe(-1);
  });
});

A few things to note here:

We need to create an instance of this class so that we can access its methods. Where should we create that instance? We could have created it in each of the it functions, but that would go against our DRY (Don't Repeat Yourself) principle.

The need is to create the instance for each test case, and jasmine offers the beforeEach function for exactly this purpose. The code inside beforeEach will be executed before each it function call.

The activity inside beforeEach is most commonly called Arrange, and inside each it function we Act and Assert. There is also afterEach, which we can use to tear down the setup we did in beforeEach.

Don't forget to keep going back to your console and browser console to see how your tests are performing :)

By now we are a little more confident with the testing framework in use and ready to take on some complex cases. Let's look at testing a class that uses an Angular form.

Unit Testing a class having angular form in it

Here is how our class looks:


import { FormBuilder, Validators, FormGroup } from '@angular/forms';

export class TodoFormComponent {
  form: FormGroup;

  constructor(fb: FormBuilder) {
    this.form = fb.group({
      name: ['', Validators.required],
      email: [''],
    });
  }
}

And the test code:


import { FormBuilder } from '@angular/forms';
import { TodoFormComponent } from './todo-form.component';

describe('TodoFormComponent', () => {
  var component: TodoFormComponent;

  beforeEach(() => {
    component = new TodoFormComponent(new FormBuilder());
  });

  it('should create form with 2 controls', () => {
    expect(component.form.contains('name')).toBeTruthy();
    expect(component.form.contains('email')).toBeTruthy();
  });

  it('should make name control as required when empty value is set', () => {
    let control = component.form.get('name');
    control.setValue('');
    expect(control.valid).toBeFalsy();
  });

  it('should make name control as required when null value is set', () => {
    let control = component.form.get('name');
    control.setValue(null);
    expect(control.valid).toBeFalsy();
  });

  it('should pass required validation when a valid value is set', () => {
    let control = component.form.get('name');
    control.setValue('Vikash');
    expect(control.valid).toBeTruthy();
  });
});

In these tests we ensure that our form gets created with the desired number of controls, and we test whether the name control's validator works for us based on the value given to it.

How about a class that emits an event? Here comes our class that emits one:


import { EventEmitter } from '@angular/core';

export class UserResponseComponent {
  totalLikes = 0;
  likeChanged = new EventEmitter();

  upLike() {
    this.totalLikes++;
    this.likeChanged.emit(this.totalLikes);
  }
}

Here is our test code:


import { UserResponseComponent } from './user.response.component';

describe('UserResponsesComponent', () => {
  var component: UserResponseComponent;

  beforeEach(() => {
    component = new UserResponseComponent();
  });

  it('should raise likeChanged event when upLiked', () => {
    let totalLikes = null;
    component.likeChanged.subscribe(tl => totalLikes = tl);
    component.upLike();
    expect(totalLikes).toBe(1);
  });
});

Something to note here: events are Observables, and during the arrange phase of our test we subscribe to the event, so that once it is emitted we capture the data received with it and can use that during assertion.

Finally, let's test a service and we are done with this long post :).

This is how our service looks:


import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

export class TodoService {
  constructor(private http: Http) {
  }

  add(todo) {
    return this.http.post('...', todo).map(r => r.json());
  }

  getTodos() {
    return this.http.get('...').map(r => r.json());
  }

  delete(id) {
    return this.http.delete('...').map(r => r.json());
  }
}

 

The component using this service:


import { TodoService } from './todo.service';

export class TodosComponent {
  todos: any[] = [];
  message;

  constructor(private service: TodoService) {}

  ngOnInit() {
    this.service.getTodos().subscribe(t => this.todos = t);
  }

  add() {
    var newTodo = { title: '... ' };
    this.service.add(newTodo).subscribe(
      t => this.todos.push(t),
      err => this.message = err);
  }

  delete(id) {
    if (confirm('Are you sure?'))
      this.service.delete(id).subscribe();
  }
}

 

And the test code:


import { TodosComponent } from './todos.component';
import { TodoService } from './todo.service';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/from';
import 'rxjs/add/observable/empty';
import 'rxjs/add/observable/throw';

describe('TodosComponent', () => {
  let component: TodosComponent;
  let service: TodoService;

  beforeEach(() => {
    service = new TodoService(null); // manoeuvring with null to avoid http object creation and setup
    component = new TodosComponent(service);
  });

  it('should set todos to the value returned by server via todo service', () => {
    // here we are spying on the getTodos method of TodoService; callFake takes the
    // function it's faking, giving us control over the function we fake
    // Arrange
    let todos = [1, 2, 3];
    spyOn(service, 'getTodos').and.callFake(() => {
      return Observable.from([todos]);
    });

    // Act
    component.ngOnInit();

    // Assert
    expect(component.todos).toBe(todos); // more specific than checking the length
  });

  it('should call the server and save the new todo given to it', () => {
    // Arrange
    let spy = spyOn(service, 'add').and.callFake(todo => {
      return Observable.empty();
    });

    // Act
    component.add();

    // Assert
    expect(spy).toHaveBeenCalled();
  });

  it('should add the todo returned from service add method', () => {
    // here we are spying on the add method of TodoService; returnValue allows us
    // to return Observables created using convenience functions
    // Arrange
    let todo = { id: 1 };
    let spy = spyOn(service, 'add').and.returnValue(Observable.from([todo]));

    // Act
    component.add();

    // Assert
    expect(component.todos.indexOf(todo)).toBeGreaterThan(-1);
  });

  it('should set message to error message from server', () => {
    // Arrange
    let error = "error from server";
    let spy = spyOn(service, 'add').and.returnValue(Observable.throw(error));

    // Act
    component.add();

    // Assert
    expect(component.message).toBe(error);
  });

  it('should call delete method of service when user confirms the window confirm popup', () => {
    // Arrange
    spyOn(window, 'confirm').and.returnValue(true);
    let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

    // Act
    component.delete(10);

    // Assert
    expect(spy).toHaveBeenCalledWith(10);
  });

  it('should NOT call delete method of service when user declines the window confirm popup', () => {
    // Arrange
    spyOn(window, 'confirm').and.returnValue(false);
    let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

    // Act
    component.delete(10);

    // Assert
    expect(spy).not.toHaveBeenCalled();
  });

});

Here we are using the spyOn function to spy on a method of a class (the method and the class are its 2nd and 1st arguments respectively), and then we can tweak its behaviour using either callFake or returnValue.

 

Thanks for staying with me through this long post; great to see you writing clean, maintainable and well-tested code!


Working with Observables: Hot or Cold or something adjustable

An Observable acts as an event emitter, sending a stream of events to any subscribers that have subscribed to it, and it can be hot or cold. A definition from RxJS:

Cold observables start running upon subscription, i.e., the observable sequence only starts pushing values to the observers when Subscribe is called. (…) This is different from hot observables such as mouse move events or stock tickers which are already producing values even before a subscription is active.

A cold one, for example:

let obs = Observable.create(observer => observer.next(Date.now()));

obs.subscribe(v => console.log("Subscriber# 1: " + v));

obs.subscribe(v => console.log("Subscriber# 2: " + v));

produces:
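something like the following (illustrative values; the exact timestamps depend on when you run it, but note that the two subscribers see different values):

Subscriber# 1: 1514876400123
Subscriber# 2: 1514876400131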

The output suggests that it's cold, because each subscription causes the observable to produce the sequence afresh, in this case calling Date.now() again.

A hot one:

let obs = Observable.interval(1000).publish();
obs.connect();

setTimeout(() => {
  obs.subscribe(v => console.log("Subscriber# 1: " + v));

  setTimeout(
    () => obs.subscribe(v => console.log("Subscriber# 2: " + v)), 1000);
}, 2100);

produces:
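something like the following (illustrative; the exact interleaving depends on timing):

Subscriber# 1: 2
Subscriber# 1: 3
Subscriber# 2: 3
Subscriber# 1: 4
Subscriber# 2: 4
…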

A few things to note: the first subscriber gets values from 2 onwards, and the second from 3. One thing is clear per the definition: it's a hot one, because it started producing the sequence even when there were no subscribers. Subscribers get the values published after their subscription; values emitted in the past are lost to them.

What is going on here:

  • We use interval() to create an Observable that emits every second with an increasing index value starting at 0.
  • We use publish to share the value producer across several subscriptions.
  • We subscribed our 1st subscriber after 2100 ms to ensure we miss the first 2 emits; hence Subscriber# 1 sees 2 as its first received value (0 and 1 were emitted during the first two seconds by interval).
  • We subscribed our 2nd subscriber 1000 ms after the 1st, so it starts with 3 as its first value.

The players making it happen: the job of the connect operator is to actually cause the ConnectableObservable to subscribe to the underlying source (the thing that produces values). It's the publish operator that creates the ConnectableObservable, which shares one single subscription to the underlying source. However, the publish operator doesn't subscribe to the underlying source just yet; hence we had to call the connect operator.

Now the question comes: when to use which one? The answer starts to get blurry when we land in a use case where we need an Observable that only starts generating values as the first subscriber subscribes, and then shares and re-emits the exact same values to every new subscriber.

As a rule of thumb, when we have a cold Observable and we want multiple subscribers to it, and we don't want each of them to cause the values to be regenerated but rather to reuse existing values, we need to start thinking about publish and its friends.
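For instance, a minimal sketch (RxJS 5 style, matching the imports used elsewhere in this post; the variable names are mine):

import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/publish';

// Cold: every subscriber would get its own interval sequence, starting from 0.
const cold = Observable.interval(1000);

// Shared: publish().refCount() connects on the first subscription and
// re-uses that single underlying interval for all later subscribers.
const shared = Observable.interval(1000).publish().refCount();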

Let's take an example to solidify what we have seen so far.

I have a component that lists out my activities one after the other, with an interval of 500 ms between them, something like this:

@Component({
  selector: 'my-worklist',
  template: `
    1st List
    <ul>
      <li *ngFor="#activity of activities | async">{{activity.name}}</li>
    </ul>
    2nd List
    <ul>
      <li *ngFor="#activity of activities2 | async">{{activity.name}}</li>
    </ul>
  `
})
export class WorklistComponent {
  activities: Observable<Array<any>>;
  activities2: Observable<Array<any>>;

  constructor(http: Http) {
    this.activities = http.get('activities.json')
      .map(response => response.json().activityItems)
      .publish()
      .refCount();

    setTimeout(() => this.activities2 = this.activities, 500);
  }
}

Note that we are using one of the most common observables of Angular 2, the one returned by Http, and it's a cold observable. We aim to turn it into one hot enough to fulfill the following needs:

the 1st list shows up as soon as we get a value from Http into activities, and after 500 ms my activities2 observable should show the previously emitted list, even though it missed that emission by subscribing 500 ms late.

If you run the code above, you will see the 2nd list is missing, because we made it maybe a little too hot. Here is how we make it sufficiently hot for our purposes:

@Component({
  selector: 'my-worklist',
  template: `
    1st List
    <ul>
      <li *ngFor="#activity of activities | async">{{activity.name}}</li>
    </ul>
    2nd List
    <ul>
      <li *ngFor="#activity of activities2 | async">{{activity.name}}</li>
    </ul>
  `
})
export class WorklistComponent {
  activities: Observable<Array<any>>;
  activities2: Observable<Array<any>>;

  constructor(http: Http) {
    this.activities = http.get('activities.json')
      .map(response => response.json().activityItems)
      .share(); // share is shorthand for publish().refCount()
      //.publishLast()
      //.refCount();

    setTimeout(() => this.activities2 = this.activities, 500);
  }
}

What we rather wanted is that new subscribers see exactly the old values that were already emitted earlier, and that's what publishLast (shown commented out above as publishLast().refCount(), instead of publish) does for us.

 

 

The 2nd list shows up after 500 ms.

Have a happy time picking and choosing among the Observable types per your need :).