Integration Testing Angular Applications – Part I

Continuing from my previous post on testing Angular applications, Unit Testing Angular Applications, this post explores an integration testing approach for the following features:

  • Component having property and event binding,
  • Directive,
  • Pipe

Component having property and event binding

Let's look at what we have in this component and its usage, and then we will see our integration test code.


//TS file

import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
selector: 'app-voter',
templateUrl: './voter.component.html',
styleUrls: ['./voter.component.css']
})
export class VoterComponent {
@Input() othersVote = 0;
@Input() myVote = 0;

@Output() vote = new EventEmitter();

upVote() {
if (this.myVote == 1)
return;

this.myVote++;

this.vote.emit({ myVote: this.myVote });
}

downVote() {
if (this.myVote == -1)
return;

this.myVote--;

this.vote.emit({ myVote: this.myVote });
}

get totalVotes() {
return this.othersVote + this.myVote;
}
}

 


<!-- template file -->
<div class="voter">
<i
class="glyphicon glyphicon-menu-up vote-button"
[class.highlighted]="myVote == 1"
(click)="upVote()"></i>

<span class="vote-count">{{ totalVotes }}</span>

<i
class="glyphicon glyphicon-menu-down vote-button"
[class.highlighted]="myVote == -1"
(click)="downVote()"></i>
</div>

We are going to test the following cases:

  • should render total votes counter
  • should highlight upvote button when upVoted
  • should increase totalVotes when upvote button is clicked
  • should decrease totalVotes when downvote button is clicked

This is what we have in our test:


import { By } from '@angular/platform-browser';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { VoterComponent } from './voter.component';

describe('VoterComponent', () => {
let component: VoterComponent;
let fixture: ComponentFixture<VoterComponent>;

beforeEach(() => {
TestBed.configureTestingModule({
declarations: [ VoterComponent ]
})
fixture = TestBed.createComponent(VoterComponent);
component = fixture.componentInstance;
});

it('should render total votes counter', () => {
component.othersVote = 20;
component.myVote = 1;

fixture.detectChanges();

let de = fixture.debugElement.query(By.css('.vote-count'));
let el: HTMLElement = de.nativeElement;
expect(el.innerText).toContain(21);
});

it('should highlight upvote button when upVoted', () => {
component.myVote = 1;

fixture.detectChanges();

let de = fixture.debugElement.query(By.css('.glyphicon-menu-up'));

expect(de.classes['highlighted']).toBeTruthy();
});

it('should increase totalVotes when upvote button is clicked', () => {
let button = fixture.debugElement.query(By.css('.glyphicon-menu-up'));

button.triggerEventHandler('click', null);

expect(component.totalVotes).toBe(1);
});

it('should decrease totalVotes when downvote button is clicked', () => {
let button = fixture.debugElement.query(By.css('.glyphicon-menu-down'));

button.triggerEventHandler('click', null);

expect(component.totalVotes).toBe(-1);
});
});

A few major differences from our unit testing approach: here we are not new'ing the component; instead we use TestBed and configure a testing module that simulates the application's regular module. We work with the fixture and the component instance, simulate HTML events such as button clicks, and act and assert directly against the HTML elements of the template.
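Staying with the same setup, we could also assert the vote output itself; a minimal sketch that would fit into the same describe block (emittedVote is just a local capture variable introduced for the test):

it('should emit the vote event with my vote when upvote button is clicked', () => {
  let emittedVote: any = null;
  component.vote.subscribe(v => emittedVote = v);

  let button = fixture.debugElement.query(By.css('.glyphicon-menu-up'));
  button.triggerEventHandler('click', null);

  expect(emittedVote.myVote).toBe(1);
});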

Directive

Our directive is called HighlightDirective. It has a defaultColor and another color that can be set by the consuming component via property binding. Let's look at its code:


import { Directive, Input, ElementRef, OnChanges } from '@angular/core';

@Directive({
selector: '[highlight]'
})
export class HighlightDirective implements OnChanges {
defaultColor = 'pink';
@Input('highlight') bgColor: string;

constructor(private el: ElementRef) {
}

ngOnChanges() {
this.el.nativeElement.style.backgroundColor = this.bgColor || this.defaultColor;
}
}

We will create the component that uses this directive, with the following template:


@Component({
template: `
<p highlight="lightblue">First</p>
<p highlight>Second</p>

`
})
class DirectiveHostComponent {
}

So we are going to test the following cases:

  • should highlight 1st para with the directive bgColor
  • should highlight 2nd para with default background color
  • should set the directive bgColor to lightblue

Here is what we write in test spec file:


import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { HighlightDirective } from './highlight.directive';
import { By } from '@angular/platform-browser';
import { Component } from '@angular/core';

//Important to create component here so that we can apply directive to its template
//elements and test the effect.
@Component({
template: `
<p highlight="lightblue">First</p>
<p highlight>Second</p>

`
})
class DirectiveHostComponent {
}

describe('HighlightDirective', () => {
let fixture: ComponentFixture<DirectiveHostComponent>;

beforeEach(() => {
TestBed.configureTestingModule({
declarations: [ DirectiveHostComponent, HighlightDirective ]
});
fixture = TestBed.createComponent(DirectiveHostComponent);
fixture.detectChanges();
});

it('should highlight 1st para with the directive bgColor', () => {
let de = fixture.debugElement.queryAll(By.css('p'))[0]; //get 1st para element

let directive = de.injector.get(HighlightDirective);
expect(de.nativeElement.style.backgroundColor).toBe(directive.bgColor);
});

it('should highlight 2nd para with default background color', () => {
let de = fixture.debugElement.queryAll(By.css('p'))[1]; //get 2nd para element

let directive = de.injector.get(HighlightDirective);
expect(de.nativeElement.style.backgroundColor).toBe(directive.defaultColor);
});

it('should set the directive bgColor to lightblue', () => {
let de = fixture.debugElement.queryAll(By.css('p'))[0];//get 1st para element

let directive = de.injector.get(HighlightDirective);
expect(directive.bgColor).toBe('lightblue');
});
});

To reduce dependencies and keep the test clean, we created the component that uses the directive in the test spec file itself.
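As an aside, instead of querying by CSS class we could also locate the elements that carry the directive with By.directive; a small sketch that fits the same spec:

it('should apply the directive to both paragraphs', () => {
  let des = fixture.debugElement.queryAll(By.directive(HighlightDirective));
  expect(des.length).toBe(2);
});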

Pipe

Our pipe transforms the text it is applied to into Title Case. Let's see its code:


import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
name: 'titlecase'
})
export class TitlecasePipe implements PipeTransform {

transform(input: any, args?: any): any {
if (typeof input !== 'string') {
throw new Error('Requires a String as input');
}
return input.length === 0 ? '' :
input.replace(/\w\S*/g, (txt => txt[0].toUpperCase() + txt.substr(1).toLowerCase() ));
}

}

The usage of the pipe:

<span>{{ title | titlecase }}</span>

The component that uses it can have test code as shown below:


describe('UserDetailsComponent', () => {
let component: UserDetailsComponent;
let fixture: ComponentFixture<UserDetailsComponent>;

beforeEach(() => {

TestBed.configureTestingModule({
imports: [],
declarations: [UserDetailsComponent, TitlecasePipe],
providers: [
]
})
fixture = TestBed.createComponent(UserDetailsComponent);
component = fixture.componentInstance;
fixture.detectChanges();
});


it('should convert title name to Title Case', () => {
const inputName = 'quick BROWN fox';
const titleCaseName = 'Quick Brown Fox';
let titleDisplay = fixture.debugElement.query(By.css('span')).nativeElement;
let titleInput = fixture.debugElement.query(By.css('input')).nativeElement;

// simulate user entering new name into the input box
titleInput.value = inputName;

// dispatch a DOM event so that Angular learns of input value change.
let evnt = document.createEvent('CustomEvent');
evnt.initCustomEvent('input', false, false, null);
titleInput.dispatchEvent(evnt);

// Tell Angular to update the output span through the title pipe
fixture.detectChanges();

expect(titleDisplay.textContent).toBe(titleCaseName);
});
});

And we can unit test the pipe itself with the code below:


import { TitlecasePipe } from './titlecase.pipe';

describe('TitlecasePipe', () => {
const pipe = new TitlecasePipe();
it('create an instance', () => {
expect(pipe).toBeTruthy();
});
it('should work with empty string', () => {
expect(pipe.transform('')).toEqual('');
});

it('should titlecase given string input', () => {
expect(pipe.transform('wow')).toEqual('Wow');
});

it('should throw error with invalid values', () => {
//must use arrow function for expect to capture exception
expect(()=>pipe.transform(undefined)).toThrow();
expect(()=>pipe.transform(9)).toThrowError('Requires a String as input');
});
});

A point worth noting: when we create a component, directive, etc. with the ng generate utility, we see two copies of beforeEach, as shown below:


beforeEach(async(() => {
TestBed.configureTestingModule({
declarations: [<<YourComponent>>]
})
.compileComponents();
}));

beforeEach(() => {
fixture = TestBed.createComponent(<<YourComponent>>);
component = fixture.componentInstance;
fixture.detectChanges();
});

Note the async version; we may safely remove that copy. With @angular/cli, webpack is our default build and packaging tool, and webpack compiles and inlines the templates, so we are not required to reach out to the file system asynchronously and compile them separately. For this reason, the async version of beforeEach is missing from all the test specs provided in this post.
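If we do drop it, what remains is just the synchronous version; a minimal sketch of the merged beforeEach:

beforeEach(() => {
  TestBed.configureTestingModule({
    declarations: [<<YourComponent>>]
  });
  fixture = TestBed.createComponent(<<YourComponent>>);
  component = fixture.componentInstance;
  fixture.detectChanges();
});

(If a component loads its template via templateUrl outside the CLI/webpack pipeline, keep the async version and its compileComponents() call.)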

 

Great to see us writing clean, maintainable and well-tested code! In part II of this post we will explore an integration testing approach for services and routes.


Unit Testing Angular Applications

Testing is one of the most fundamental activities we as developers are expected to do when we write an application. It has a lot of benefits: it ensures we write quality, maintainable code, and while writing test cases we identify coupling among application components and get an opportunity to re-examine our design, with an aim to remove unnecessary coupling and dependencies and make the code more maintainable.

In this post we are going to start exploring how we would build our unit tests for the following scenarios:

  • a very basic function
  • strings and arrays
  • a simple class
  • a class having an Angular form in it
  • a service
  • a component that emits an event

In my upcoming post on this topic I will expand this to cover integration tests, where we will explore how to write test cases for most of the above scenarios in integration with the Angular framework: testing routers, services, and components while simulating user interactions.

Unit Testing a basic function

Let's suppose we have a function named compute that takes a number and increments it if the passed-in value is >= zero.

 

export function compute(number) {
  if (number < 0)
    return 0;
  return number + 1;
}

In that very same folder, create a file with the suffix '.spec.ts'. Assuming the function is written in compute.ts, create compute.spec.ts and add the code shown below:

import { compute } from './compute';

describe('compute', () => {
  it('should return 0 when called with negative numbers', () => {
    let result = compute(-1);
    expect(result).toBe(0);
  });

  it('should increment by 1 when called with non negative numbers', () => {
    const parameter = 1;
    let result = compute(parameter);
    expect(result).toBe(parameter + 1);
  });
});

In our learning we are using @angular/cli as the tool, which in turn uses Karma and Jasmine. This is all taken care of and created for you when you create a new Angular application using @angular/cli with the command:

ng new <<your-app-name>>

then change directory to the newly created application and run

ng test

This ensures the Karma test engine is running and responding to the changes you make to your test file(s).

You should see something like this in the console where you ran ng test:

[screenshot: ng test console output]

ng test also launches a web interface at http://localhost:9876/. Go to this URL, click the DEBUG button, and open the browser console (F12); there you can see how your tests are performing.

[screenshot: Karma DEBUG view in the browser console]

 

A bit about Karma/Jasmine: describe is the function with which we write our test suite, and inside it, using the it function, we write our test cases. You configure Karma in the karma.conf.js file available in your application; a quick look confirms why we see Karma's web interface on port 9876.

An excerpt from karma.conf.js –


angularCli: {
config: './angular-cli.json',
environment: 'dev'
},
reporters: config.angularCli && config.angularCli.codeCoverage
  ? ['progress', 'karma-remap-istanbul']
  : ['progress'],
port: 9876,
colors: true,
logLevel: config.LOG_INFO,
autoWatch: true,
browsers: ['Chrome'],
singleRun: false

 

Unit Testing strings and arrays

This is the code that we would like to write tests for:


//greet.ts

export function greet(name) {
return 'Welcome ' + name;
}


//getCurrencies.ts

export function getCurrencies() {
return ['USD', 'AUD', 'EUR', 'INR'];
}

Let's look at the tests:


//greet.spec.ts

import { greet } from './greet';

describe('greet', () => {
  it('should contain passed param in the message', () => {
    const parameter = 'Vikash';
    expect(greet(parameter)).toContain(parameter);
  });
});


//getCurrencies.spec.ts

import {getCurrencies } from './getCurrencies';

describe('getCurrencies', () => {
  it('should return supported currencies', () => {
    const currencies = getCurrencies();
    expect(currencies).toContain('AUD');
    expect(currencies).toContain('INR');
    expect(currencies).toContain('USD');
  });
});

It's very similar to the earlier test that we wrote for the compute function. Please note that we could have made our greet test pass even with toBe('Welcome ' + parameter), but that would make the test fragile: it would break as soon as we change the static text in the greet function, say from 'Welcome' to 'Hello' or 'Hola'. toContain protects us from that fragility and is good enough to cover what we need to test: that the data we pass as a parameter is part of the message returned from the greet function.
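To make the difference concrete, here is the fragile assertion next to the robust one:

// fragile: breaks the moment the greeting text changes
expect(greet('Vikash')).toBe('Welcome Vikash');

// robust: only checks that the parameter made it into the message
expect(greet('Vikash')).toContain('Vikash');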

 

Unit Testing a simple class

Here comes our very very simple class-


export class UserResponseComponent {
totalLikes = 0;
like() {
this.totalLikes++;
}
disLike() {
this.totalLikes--;
}
}

And here is our test file:


import { UserResponseComponent } from './user.response.component';

describe('UserResponseComponent', () => {
  let userRespComp = null;

  beforeEach(() => {
    //Arrange
    userRespComp = new UserResponseComponent();
  });

  it('should increment the totalLikes counter by 1 when liked', () => {
    //Act
    userRespComp.like();
    //Assert
    expect(userRespComp.totalLikes).toBe(1);
  });

  it('should decrement the totalLikes counter by 1 when disliked', () => {
    //Act
    userRespComp.disLike();
    //Assert
    expect(userRespComp.totalLikes).toBe(-1);
  });
});

A few things to note here:

We need to create an instance of this class so that we can access its methods. Where should we create that instance? We could have created it inside each it function, but that would go against the DRY (Don't Repeat Yourself) principle.

The need is to create the instance afresh for each test case, and Jasmine offers the beforeEach function for exactly this purpose. The code inside beforeEach is executed before each it function call.

We commonly call what we do inside beforeEach the Arrange step, and inside each it function we Act and Assert. There is also afterEach, which we can use to tear down the setup we did in beforeEach.
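For example, if the Arrange step allocated something that needs cleanup, we could pair it with afterEach; a minimal sketch for this spec:

afterEach(() => {
  //Tear down whatever beforeEach arranged (nothing really needed in this simple case)
  userRespComp = null;
});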

Don't forget to keep going back to your console and browser console to see how your tests are performing :)

By now we are a little more confident with the testing framework in use and ready to take on some more complex cases. Let's look at testing a class that uses an Angular form.

Unit Testing a class having angular form in it

Here is how our class looks:


import { FormBuilder, Validators, FormGroup } from '@angular/forms';

export class TodoFormComponent {
form: FormGroup;

constructor(fb: FormBuilder) {
this.form = fb.group({
name: ['', Validators.required],
email: [''],
});
}
}

And the test code –


import { FormBuilder } from '@angular/forms';
import { TodoFormComponent } from './todo-form.component';

describe('TodoFormComponent', () => {
  var component: TodoFormComponent;

  beforeEach(() => {
    component = new TodoFormComponent(new FormBuilder());
  });

  it('should create form with 2 controls', () => {
    expect(component.form.contains('name')).toBeTruthy();
    expect(component.form.contains('email')).toBeTruthy();
  });

  it('should make name control invalid when empty value is set', () => {
    let control = component.form.get('name');
    control.setValue('');
    expect(control.valid).toBeFalsy();
  });

  it('should make name control invalid when null value is set', () => {
    let control = component.form.get('name');
    control.setValue(null);
    expect(control.valid).toBeFalsy();
  });

  it('should pass required validation when a valid value is set', () => {
    let control = component.form.get('name');
    control.setValue('Vikash');
    expect(control.valid).toBeTruthy();
  });
});

In these tests we ensure that our form gets created with the desired number of controls, and we check, based on the value given to the name form control, whether its validator is working for us or not.
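In the same spirit we could pin down that the email control is optional; a small sketch that fits the same spec:

it('should keep email control valid when empty value is set', () => {
  let control = component.form.get('email');
  control.setValue('');
  expect(control.valid).toBeTruthy();
});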

How about a class that emits an event? Here comes our class that emits an event:


import { EventEmitter } from '@angular/core';

export class UserResponseComponent {
totalLikes = 0;
likeChanged = new EventEmitter();

upLike() {
this.totalLikes++;
this.likeChanged.emit(this.totalLikes);
}
}

Here is our test code –


import { UserResponseComponent } from './user.response.component';

describe('UserResponsesComponent', () => {
var component: UserResponseComponent;

beforeEach(() => {
component = new UserResponseComponent();
});

it('should raise likeChanged event when upLiked', () => {
let totalLikes = null;
component.likeChanged.subscribe(tl => totalLikes = tl)
component.upLike();
expect(totalLikes).toBe(1);
});
});

Something to note here: events are Observables, and during the Arrange phase of our test we subscribe to the event, so that once it is emitted we capture the data received with it in a local variable and can use it during the assertion.

Finally, let's test a service and we are done with this long post :).

This is how our service looks:


import { Http } from '@angular/http';
import 'rxjs/add/operator/map';

export class TodoService {
constructor(private http: Http) {
}
add(todo) {
return this.http.post('...', todo).map(r => r.json());
}

getTodos() {
return this.http.get('...').map(r => r.json());
}

delete(id) {
return this.http.delete('...').map(r => r.json());
}
}

 

The component using this service:


import { TodoService } from './todo.service'

export class TodosComponent {
todos: any[] = [];
message;

constructor(private service: TodoService) {}

ngOnInit() {
this.service.getTodos().subscribe(t => this.todos = t);
}

add() {
var newTodo = { title: '... ' };
this.service.add(newTodo).subscribe(
t => this.todos.push(t),
err => this.message = err);
}

delete(id) {
if (confirm('Are you sure?'))
this.service.delete(id).subscribe();
}
}

 

And the test code –


import { TodosComponent } from './todos.component';
import { TodoService } from './todo.service';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/from';
import 'rxjs/add/observable/empty';
import 'rxjs/add/observable/throw';
//import * as _ from 'lodash';

describe('TodosComponent', () => {
let component: TodosComponent;
let service: TodoService;

beforeEach(() => {

service = new TodoService(null); //manoeuvring with null to avoid http object creation and setup
component = new TodosComponent(service);
});

it('should set todos to the value returned by server via todo service', () => {
//here we are spying on the getTodos method of TodoService; callFake takes the fake
//implementation, giving us control over what the spied function returns
//Arrange
let todos = [1, 2, 3];
spyOn(service, 'getTodos').and.callFake(() => {
return Observable.from([todos]);
});

//Act
component.ngOnInit();

//Assert
//expect(component.todos.length).toBe(3);
expect(component.todos).toBe(todos); //more specific assertion
});

it('should call the server and save the new todo given to it', () => {
//here we are spying on the add method of TodoService; callFake takes the fake
//implementation, giving us control over what the spied function returns
//Arrange
let spy = spyOn(service, 'add').and.callFake(todo => {
return Observable.empty();
});

//Act
component.add();

//Assert
expect(spy).toHaveBeenCalled();
});

it('should add the todo returned from service add method', () => {
//here we are spying on the add method of TodoService; returnValue allows us
//to return Observables that we created using convenience functions
//Arrange
let todo = { id: 1 };
let spy = spyOn(service, 'add').and.returnValue(Observable.from([todo]));

//Act
component.add();

//Assert

expect(component.todos.indexOf(todo)).toBeGreaterThan(-1);
});

it('should set message to error message from server', () => {
//here we are spying on the add method of TodoService; returnValue allows us
//to return Observables that we created using convenience functions
//Arrange
let error = "error from server";
let spy = spyOn(service, 'add').and.returnValue(Observable.throw(error));

//Act
component.add();

//Assert
expect(component.message).toBe(error);
});

it('should call delete method of service when user confirms the window confirm popup', () => {
//here we are spying on the delete method of TodoService; returnValue allows us
//to return Observables that we created using convenience functions
//Arrange
spyOn(window, 'confirm').and.returnValue(true);
let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

//Act
component.delete(10);

//Assert
expect(spy).toHaveBeenCalledWith(10);
});

it('should NOT call delete method of service when user cancels the window confirm popup', () => {
//here we are spying on the delete method of TodoService; returnValue allows us
//to return Observables that we created using convenience functions
//Arrange
spyOn(window, 'confirm').and.returnValue(false);
let spy = spyOn(service, 'delete').and.returnValue(Observable.empty());

//Act
component.delete(10);

//Assert
expect(spy).not.toHaveBeenCalled();
});

});

Here we are using the spyOn function to spy on a method of an object (the object is the 1st argument, the method name the 2nd) and then tweak its behaviour, using either the callFake or the returnValue function.
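We can also assert the exact arguments a spy received; for instance, a small sketch (fitting the same spec) that checks add is called with the todo created inside the component:

it('should pass the new todo to the service add method', () => {
  let spy = spyOn(service, 'add').and.returnValue(Observable.empty());

  component.add();

  expect(spy).toHaveBeenCalledWith({ title: '... ' });
});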

 

Thanks for staying with me through this long post; great to see you writing clean, maintainable and well-tested code!

 

 

 

Working with Observables: Hot or Cold or something adjustable

An Observable acts as an event emitter, sending a stream of events to any subscribers, and it can be either hot or cold. Definition from RxJS:

Cold observables start running upon subscription, i.e., the observable sequence only starts pushing values to the observers when Subscribe is called. (…) This is different from hot observables such as mouse move events or stock tickers which are already producing values even before a subscription is active.

A cold one, for example:

let obs = Observable.create(observer => observer.next(Date.now()));

obs.subscribe(v => console.log("Subscriber# 1: " + v));

obs.subscribe(v => console.log("Subscriber# 2: " + v));

produces:
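Something along these lines (each subscriber triggers its own call to Date.now(), so the exact values depend on the run):

Subscriber# 1: <timestamp produced for the 1st subscription>
Subscriber# 2: <timestamp produced for the 2nd subscription>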

The output suggests that it's cold, because each subscription causes the observable to produce the sequence again, in this case calling Date.now().

A hot one:

let obs = Observable.interval(1000).publish();

 obs.connect();

setTimeout(() => {

 obs.subscribe(v => console.log("Subscriber# 1: " + v));

 setTimeout(

 () => obs.subscribe(v => console.log("Subscriber# 2: " + v)), 1000);

},2100);

produces:
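With the timings used above (interval of 1000 ms, 1st subscribe at ~2100 ms, 2nd at ~3100 ms), the console output looks like:

Subscriber# 1: 2
Subscriber# 1: 3
Subscriber# 2: 3
Subscriber# 1: 4
Subscriber# 2: 4
...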

A few things to note: the first subscriber gets values from 2 onwards, and the 2nd from 3. One thing is clear per the definition: it's a hot one, because it started to produce the sequence even when there were no subscribers. Subscribers get the values published after their subscription; values published/emitted in the past are lost to them.

What is going on here:

  • We use interval() to create an Observable that emits every second with an increasing index value starting at 0.
  • We use publish to share the value producer across several subscriptions.
  • We subscribed our 1st subscriber after 2100 ms to ensure we miss the 1st 2 emits, hence Subscriber# 1 sees 2 as its 1st received value (0 and 1 were emitted by interval during the first two seconds).
  • We subscribed our 2nd subscriber 1000 ms after 1st subscriber, so it starts with 3 as its first subscription value.

The players making it happen: the job of the connect operator is to actually cause the ConnectableObservable to subscribe to the underlying source (the thing that produces values). It's the publish operator that creates the ConnectableObservable, which shares one single subscription to the underlying source. However, publish does not subscribe to the underlying source just yet, hence we had to call connect.

Now the question comes: when to use which one? The answer starts to get blurry when we land in a use case like this: we need an Observable that only starts generating values when the first subscriber subscribes, and then shares and re-emits the exact same values to every new subscriber.

As a rule of thumb, when we have a cold Observable and we want multiple subscribers to it, and we don't want each new subscriber to regenerate the values but rather reuse the existing ones, we need to start thinking about publish and its friends.

Let's take an example to solidify what we have seen so far:

I have a component that lists out my activities in two lists, one after the other with an interval of 500 ms between them, something like this:

@Component({

 selector: 'my-worklist',

 template: `

 1st List 

 <ul>

 <li *ngFor="#activity of activities | async">{{activity.name}}</li>

 </ul>

 2nd List 

 <ul>

 <li *ngFor="#activity of activities2 | async">{{activity.name}}</li>

 </ul>

 `
})

export class WorklistComponent { 

 activities: Observable<Array<any>>;

 activities2: Observable<Array<any>>;

 constructor(http: Http) {

 this.activities = http.get('activities.json')

 .map(response => response.json().activityItems)

.publish()

.refCount();

 setTimeout(() => this.activities2 = this.activities, 500);

}

}

Note that we are using one of the most common observables in Angular 2, the one returned by Http's get call, and it's a cold observable. We aim to turn it into something hot enough to fulfill the following needs:

the 1st list shows up as soon as we get a value from Http into activities, and after 500 ms my 2nd observable, activities2, should show the previously emitted list even though it missed that emission, as it subscribed 500 ms later.

If you run the above code, you will see the 2nd list is missing, because we made it maybe a little too hot. Here is how we make it just hot enough for our purposes:

@Component({

 selector: 'my-worklist',

 template: ` 

 1st List 

 <ul>

 <li *ngFor="#activity of activities | async">{{activity.name}}</li>

 </ul>

 2nd List 

 <ul>

 <li *ngFor="#activity of activities2 | async">{{activity.name}}</li>

 </ul>
 `

})

export class WorklistComponent { 

 activities: Observable<Array<any>>;

 activities2: Observable<Array<any>>;

 constructor(http: Http) {

 this.activities = http.get('activities.json')

 .map(response => response.json().activityItems)

 .publishLast()

 .refCount();

 // Note: share() is a shorthand for publish-style multicasting plus refCount;
 // it is publishLast that replays the last emitted value to late subscribers.

 setTimeout(() => this.activities2 = this.activities, 500);

}

}

What we rather wanted is that new subscribers see exactly the old values that were already emitted earlier, and that is what publishLast, instead of publish, does for us.
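A minimal standalone sketch of that behaviour, using the same RxJS 5 operator style as the rest of this post (of, publishLast and refCount are assumed to be imported/patched in):

let source = Observable.of(1, 2, 3)
  .publishLast()
  .refCount();

source.subscribe(v => console.log('early subscriber: ' + v)); // early subscriber: 3

setTimeout(() => {
  source.subscribe(v => console.log('late subscriber: ' + v)); // late subscriber: 3
}, 500);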

 

 

With this change, the 2nd list shows up after 500 ms.

Happy picking and choosing among Observable types per your need :).

From Virtual Machines towards Docker Containers

In general, most 3-tier applications require one to install a database and an application server, and finally to deploy the application within the application server. Applications can be built using Maven, producing a war file that gets published to a distribution registry/repository. The consumer of the application is responsible for setting up the database and the application server and for deploying the web application in the application server. This approach has worked and still works; with it, some of the regular issues/support requests one gets are:

  • Consumers don't have expertise on the application server, the database, or both.
  • How to make the application server and the database communicate with each other.
  • The third-party jars at our end are different from the ones your application expects.
  • It worked in our Dev environment but is troubling us in Test/UAT, etc.

 

In the last few weeks we attempted to find a solution to these issues/support requests. We looked at Docker containers as an alternative to our current setup. That led us to work on how we would migrate our web application from a standalone setup (virtual machines, desktops, etc.) to Docker containers, with the goal of gaining some of the advantages listed below:

  • Using Docker containers enables us to deploy ready-to-run, portable software. Containerized applications are not installed; they simply run within their containers
  • Using Docker containers eliminates problems such as software conflicts, driver compatibility issues, and library conflicts. The architecture of Docker containers enables you to isolate resources.
  • Docker containers empower micro service architectures, in which monolithic applications are decoupled into minimalist, specialized containers.
  • Docker containers simplify DevOps. Developers can work inside the containers, and operations engineers can work in parallel outside the containers.
  • The use of Docker containers speeds up continuous integration. Traditional installer development teams struggle with rapid build-test-deploy cycles. Docker containers ensure that applications run identically in development, test, and production environments.
  • Docker containers can be run from anywhere: computers, local servers, and private and public clouds.
  • Deploying your products as Docker containers helps you reach new customer segments.

In addition, Docker containers offer various advantages over virtual machines:

  • A hypervisor is required to host and maintain virtual machines; a system administrator needs to maintain the hypervisor itself. However, the Docker engine is a lightweight container virtualization technology that runs on the host operating system, and Docker containers can be started very quickly. Thus, the Docker technology alleviates the efforts of system administrators.
  • Although virtual machines and Docker containers both offer resource isolation, Docker containers share the operating system’s kernel and appropriate bins or libraries with other Docker containers. The result is faster deployment, less overhead, easier migration, and faster restarts.
  • Docker containers are imported into the Docker engine as simple .tar files, enabling Docker containers to spin up and down very quickly. The Docker engine handles virtualization parameters such as allocation of the file system when the application is launched. To successfully run an application on a virtual machine, the hypervisor needs to first import the virtual machine and then power it up; this takes time. (Source: http://helpnet.flexerasoftware.com/installanywhere2015/Content/helplibrary/DockerBenefits.htm)

With Docker containers we can now install and deploy only the software required by our application, configure the communication between components, and keep only the third-party jars that we need, controlling their specific versions. We bundle all these artifacts into a Docker image and publish it for our consumers to pull and just run.

This gave us answers to most of the issues/support requests listed above: no third-party jar version issues, no need to build expertise at the consumer's end regarding the application server and database, and no more different sets of artifacts across setups like DEV, TEST, UAT or even PRODUCTION. If a container has been tested in TEST/UAT, technically everything is the same in PRODUCTION and should work as expected and as observed during TEST/UAT.

So what does it take to build a Docker image? Docker images are driven by a Dockerfile, and writing it mostly falls under the responsibility and expertise of the organization developing/providing the application. The team needs to write a Dockerfile like the one shown below:

FROM mysql:5.6

MAINTAINER VP <vikash1976@gmail.com>

RUN echo "Setting up Tomcat MySQL and a Web app container"

#Installing wget, vim and open jdk using apt

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y wget vim openjdk-7-jre

#Tomcat setup.....

RUN mkdir -p /root/my-app

WORKDIR /root/my-app

RUN wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.69/bin/apache-tomcat-7.0.69.tar.gz && \

 gunzip apache-tomcat-7.0.69.tar.gz && \

 tar -xvf apache-tomcat-7.0.69.tar

#Configuring Tomcat

RUN rm -rf /root/my-app/apache-tomcat-7.0.69/bin/catalina.sh && \

 rm -rf /root/my-app/apache-tomcat-7.0.69/conf/server.xml && \

 rm -rf /root/my-app/apache-tomcat-7.0.69/webapps/*

COPY ./to-container/catalina.sh /root/my-app/apache-tomcat-7.0.69/bin/

COPY ./to-container/server.xml /root/my-app/apache-tomcat-7.0.69/conf/

#Adding mysql & spring-agent jar

ADD ./to-container/mysql-connector-java-5.1.12.jar /root/my-app/apache-tomcat-7.0.69/lib

ADD ./to-container/spring-agent.jar /root/my-app/apache-tomcat-7.0.69/lib

#Changing file permissions

RUN chmod 755 -R /root/my-app/apache-tomcat-7.0.69/

 

#Deploying my-app war

ADD ./to-container/my-app-portal.war /root/my-app/apache-tomcat-7.0.69/webapps

#Exposing container port

EXPOSE 8080

#Configuring MySQL settings as per web app needs

ADD ./to-container/my.cnf /etc/mysql

#my-app.sql is schema creation script for the web app

ADD ./to-container/my-app.sql /root/my-app/

RUN chmod 755 /etc/mysql/my.cnf

#Starting MySQL, creating schema and starting app server

CMD service mysql start && mysql -uroot -proot < /root/my-app/my-app.sql && /root/my-app/apache-tomcat-7.0.69/bin/catalina.sh run

Once we have this Dockerfile, we build the image with the following Docker command:

docker build --no-cache=false --build-arg http_proxy=<<proxyhost:port>> -t my-app/tc-mysql-dv-1 .

This command builds the Docker image, which can be published to a Docker repository, pulled, and used to instantiate containers. We could have had 2 separate containers, one for the database and another for the application server with the web application deployed within it, linking the application server container with the database one. We found that this puts more work on the consumer's side, hence we opted to bundle everything required in one image. One image has MySQL, Tomcat, our web application and all required configurations and settings; the consumer just needs to pull it, instantiate the container, and is ready to work with the application.

What consumers need to do to create a Docker container out of the built and published Docker image:

  • Pull the image from Docker registry
docker pull <<your docker registry host: port>>/<<image name (in this case my-app/tc-mysql-dv-1)>
  • Run the container
docker run -p 8090:8080 --memory=2GB --name=my-app-mysql-dc1 -itd <<docker image id/image name (in this case my-app/tc-mysql-dv-1)>

*Note: in the above command, with -p we are saying: map port 8080 of the Docker container to port 8090 of the host machine, so that we can access our application on 8090.

Try and access the application at http://<<yourhost:port(in this case 8090)>>/my-app-portal/

This container is good enough to run the application and play with it; once done, stop and destroy it. Containers are ephemeral. By "ephemeral" we mean that a container can be stopped and destroyed, and a new one built and put in place, with an absolute minimum of set-up and configuration. What happens to the data we generated while playing with this container? It's all gone. But no worries: there are ways to configure our container so that generated data is persisted, and even after we stop and destroy the container we can run another container based on the image and configure it to read/write data from the same persistence location the earlier container used.

To achieve that we need to do the following:

  • Create volume container(s)
docker create -v /var/lib/mysql --name mysqlstore centos -->Container persistence for database operations

docker create -v /root/my-app/jcr-repo --name jcrfilestore centos -->Container persistence for read write, like file based JCR content repository
  • And then run the container with following Docker command:
docker run -p 8090:8080 --memory=2GB --name=my-app-mysql-dc1 --volumes-from mysqlstore --volumes-from jcrfilestore -itd <<docker image id/image name (in this case my-app/tc-mysql-dv-1)>

This run command configures the container to use the volume containers for its persistence needs; the data gets persisted on the host system and can be attached to newly created containers.

More on Docker: https://docs.docker.com/. It's awesome!!!

Go Lang: Protecting endpoints with JWT

When we build a web application, one of the most basic needs is to protect the endpoints it exposes. We have many tools/techniques/utilities to achieve this; in this post we will explore JSON Web Tokens (JWT) in Go.

Our sample web application exposes 2 endpoints:

/iAmProtected and /getMeToken (by accessing the latter, the client receives the JWT token that our application generates).

When a client accesses /iAmProtected without a token, it gets:

[screenshot: 401 Unauthorized response]

To get the token, the client sends a POST request, like:

[screenshot: POST to /getMeToken returning a token]

The client then uses the token by putting it in the Authorization header of the request to access the protected endpoint:

[screenshot: request with the token succeeding]

And in case a wrong token is provided:

[screenshot: response for an invalid token]

We captured these activities in the server log using logrus; the log looks like:


{"Method":"GET","RemoteAddress":"127.0.0.1:59246","RequestURI":"/iAmProtected","URL":"/iAmProtected","level":"error","msg":"Unauthorized access to this resource by theUser","time":"2017-04-23T16:09:41+05:30"}
{"Method":"POST","RemoteAddress":"127.0.0.1:59246","RequestURI":"/getMeToken","URL":"/getMeToken","level":"info","msg":"Token generated successfully.","time":"2017-04-23T16:10:59+05:30","userId":"theUser"}
{"Method":"GET","RemoteAddress":"127.0.0.1:59246","RequestURI":"/iAmProtected","URL":"/iAmProtected","level":"info","msg":"Access provided to protected resource","time":"2017-04-23T16:12:15+05:30"}
{"Method":"GET","RemoteAddress":"127.0.0.1:59246","RequestURI":"/iAmProtected","URL":"/iAmProtected","level":"error","msg":"Unauthorized access to this resource by theUser","time":"2017-04-23T16:13:00+05:30"}

So far we have seen how our sample web application behaves in different scenarios, and we have observed that the route protection is in place. Let's look at what it takes to make it work this way.

Tooling:

logrus – go get github.com/Sirupsen/logrus

jwt-go – go get github.com/dgrijalva/jwt-go

negroni – go get github.com/codegangsta/negroni

To prepare for the creation of a signed JWT, we require our private and public keys. To generate them, use the following commands:

openssl genrsa -out priKey.rsa 1024
openssl rsa -in priKey.rsa -pubout > pubKey.rsa.pub

In my case I have placed them under the jwtKeys folder of my Go application.

Let's have a look at the code:


package main

import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os"
"time"
"crypto/rsa"
log "github.com/Sirupsen/logrus"
"github.com/codegangsta/negroni"
jwt "github.com/dgrijalva/jwt-go"
"github.com/dgrijalva/jwt-go/request"
)
var logger *log.Entry
const logFilePath = "./log/logFile_"
const (
// File Path to key files used for signing JWT token
privKeyFile = "jwtKeys/priKey.rsa"
pubKeyFile = "jwtKeys/pubKey.rsa.pub"
)

var (
publicKey *rsa.PublicKey
privateKey *rsa.PrivateKey
)

func handleFatal(err error) {
if err != nil {
log.Fatalf("%+v", err)
}
}

func initializeKeys() {
privateKeyBytes, err := ioutil.ReadFile(privKeyFile)
handleFatal(err)
privateKey, err = jwt.ParseRSAPrivateKeyFromPEM(privateKeyBytes)
handleFatal(err)
publicKeyBytes, err := ioutil.ReadFile(pubKeyFile)
handleFatal(err)
publicKey, err = jwt.ParseRSAPublicKeyFromPEM(publicKeyBytes)
handleFatal(err)
}

// UserCredentials - will be used to store decoded post login data
type UserCredentials struct {
Username string `json:"username"`
Password string `json:"password"`
}

// Response - will be used to create Http response
type Response struct {
Data string `json:"data"`
}

// Token - will be used to create token as Http response for /login endpoint
type Token struct {
Token string `json:"token"`
}

func ConfigureNLaunchServer() {
// Un-Protected Endpoint(s), will be used by user to post credentials to receive token
http.HandleFunc("/getMeToken", TokenCreationHandler)
// Protected Endpoint(s), can be accessed only with JWT token as Authorization header
http.Handle("/iAmProtected", negroni.New(
negroni.HandlerFunc(ValidateTokenMiddleware),
negroni.Wrap(http.HandlerFunc(ProtectedResourceHandler)),
))
log.Fatalln(http.ListenAndServe(":8080", nil))
}

func main() {
log.SetFormatter(&log.JSONFormatter{})
t := time.Now()
logFileSuffix := fmt.Sprintf("%d%02d%02d%02d%02d%02d",
t.Year(), t.Month(), t.Day(),
t.Hour(), t.Minute(), t.Second())
logFilename := logFilePath + logFileSuffix + ".txt"
file, err := os.OpenFile(logFilename, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
defer file.Close()
if err == nil {
log.SetOutput(file)
} else {
log.Info("Failed to log to file, using default stderr")
}

initializeKeys()
ConfigureNLaunchServer()
}

func ProtectedResourceHandler(w http.ResponseWriter, r *http.Request) {
response := Response{"Hurrey!!!, I am in as i have JWT token with me."}
logger.Info("Access provided to protected resource")
WriteJsonResponse(response, w)
}

func TokenCreationHandler(w http.ResponseWriter, req *http.Request) {
logger = log.WithFields(log.Fields{
"URL": req.URL.Path,
"Method": req.Method,
"RemoteAddress": req.RemoteAddr,
"RequestURI": req.RequestURI,
})
var user UserCredentials
err := json.NewDecoder(req.Body).Decode(&user)
logger.Data["userId"] = user.Username
if err != nil {
logger.Errorf("%v", "Request has error")
http.Error(w, "Request has error", http.StatusForbidden)
return
}

if user.Password != user.Username+"!!" {
logger.Errorf("%v", "Invalid credentials")
http.Error(w, "Invalid credentials", http.StatusForbidden)
return
}

// preparing token, create a new one with given Signing Method,
// make claim and set claim to token
token := jwt.New(jwt.SigningMethodRS256)
claims := make(jwt.MapClaims)
// set expiry of the token
claims["exp"] = time.Now().Add(time.Minute * time.Duration(10)).Unix()
// set issued at timestamp
claims["iat"] = time.Now().Unix()
claims["username"] = user.Username
// set claims to the token
token.Claims = claims

if err != nil {
logger.Errorf("%v", "Error extracting the key")
http.Error(w, "Error extracting the key", http.StatusInternalServerError)
return
}

// now sign the token with private key, token is ready for distribution
tokenString, err := token.SignedString(privateKey)

if err != nil {
logger.Errorf("%v", "Error signing the token")
http.Error(w, "Error signing the token", http.StatusInternalServerError)
return
}
response := Token{tokenString}
logger.Info("Token generated successfully.")
WriteJsonResponse(response, w)
}

func ValidateTokenMiddleware(w http.ResponseWriter, req *http.Request, next http.HandlerFunc) {
logger = log.WithFields(log.Fields{
"URL": req.URL.Path,
"Method": req.Method,
"RemoteAddress": req.RemoteAddr,
"RequestURI": req.RequestURI,
})
// parse the token....
// keyFunc receives the parsed token and should return the key for validating, i.e. publicKey
token, err := request.ParseFromRequest(req, request.AuthorizationHeaderExtractor,
func(token *jwt.Token) (interface{}, error) {
return publicKey, nil
})

if err == nil {
if token.Valid {
next(w, req)
} else {
logger.Errorf("%v", "Invalid token")
http.Error(w, "Your token is invalid", http.StatusUnauthorized)

}
} else {
// to get a private claim like "username" in this case, we need to type assert
// token.Claims to jwt.MapClaims and then use it like a map and pick
// values based on the private claim as key.
logger.Errorf("%v by %v", "Unauthorized access to this resource", token.Claims.(jwt.MapClaims)["username"])
http.Error(w, "Got a token? You need one to access to this resource.", http.StatusUnauthorized)
}
}

func WriteJsonResponse(response interface{}, w http.ResponseWriter) {
json, err := json.Marshal(response)
if err != nil {
http.Error(w, "Error creating response", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "application/json")
w.Write(json)
}

We are using negroni to add middleware processing (ValidateTokenMiddleware) in front of our protected route handler, i.e. ProtectedResourceHandler.

In the initializeKeys function we read our public and private RSA key files as bytes and parse the corresponding keys for further use: the private key to sign the token and the public key to verify the incoming token in the Authorization header.

TokenCreationHandler uses jwt-go's methods to create, prepare and sign the token (I have tried to explain the usage in the code comments), and then the token is sent to the user/client by the WriteJsonResponse utility function. Setting up claims is an important part of creating a JWT; it is where we set Expiry, Issued At, Issuer, Subject, etc.

A JWT Token looks like:


eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE0OTI5NTE1MTQsImlhdCI6MTQ5Mjk1MDkxNCwidXNlcm5hbWUiOiJ0aGVVc2VyIn0.yb3KyRH_jkVAReN5DcTosgaPl-z_kI_wcQovaMUYu-KH9okqHSd_K2ZHFcg-HvOvTQ5y6_rVhYPbgSs7T9AMid0ITZjppKBMo1YuXpunMyA314n9JAMDDfHwai2aUhG-vMSAXo0Wwf9y2uErN-7SkqJdNce7IT2oHiEqvlhdTKQ

It is composed of 3 sections separated by dots (.): the 1st part is the header, the 2nd the claims and the 3rd the signature. A JSON Web Token is encoded data, not encrypted data, and it can be easily decoded. I tried it here and got:

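Decoding the first two Base64URL sections of the token above (plain encoding, no key required) gives the header and the claims; the third section is the binary signature:

Header: {"alg":"RS256","typ":"JWT"}
Claims: {"exp":1492951514,"iat":1492950914,"username":"theUser"}

Note that exp is exactly 600 seconds (10 minutes) after iat, matching the expiry we set in TokenCreationHandler.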

 

There are also usages of marshaling/unmarshaling data between JSON and Go structs using the Marshal and Decoder functions.

 

Securing endpoints the Go way with JWT-GO !!!

Go Lang: Handling errors and panics towards production readiness

Applications encounter errors (application errors) and panics (system errors), and they need to be coded in a way that handles them usefully. By usefully, I mean that they should be captured with as much contextual information as possible, and written to log files in a form that is easy for systems/applications to parse, so that useful insight can be gained and preventive/corrective/predictive actions can be taken.

To get to the what and how, let's take the example of a simple web application. Here is the code:


package main

import (
 "encoding/base64"
 "errors"
 "fmt"
 "log"
 "net/http"
 _ "net/http/pprof"
 "os"
 "strings"
)

var logFilePath = "./log/regLogFile.txt"

type panicHandler struct {
 http.Handler
}

type TStruct struct {
 cn string
 street string
}

func (h panicHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h.Handler.ServeHTTP(w, r)
}

// Main function of our app. The entry point to our application.
 // In this function we map request paths to their handlers.
func main() {
 file, err := os.OpenFile(logFilePath, os.O_CREATE|os.O_WRONLY, 0666)
 defer file.Close()
 if err == nil {
 log.SetOutput(file)
 } else {
 log.Println("Failed to log to file, using default stderr")
 }
 http.Handle("/panic", panicHandler{http.HandlerFunc(panicPathHandler)})
 http.HandleFunc("/", myHandleFunc)
 log.Fatalln(http.ListenAndServe("localhost:8080", nil))
}

func myHandleFunc(w http.ResponseWriter, req *http.Request) {
 w.Header().Set("Content-Type", "text/plain")
 path := req.URL.Path[1:]
 log.Printf("Request: %v\n", req)

if _, err := isAuthenticated(w, req); err != nil {
 log.Printf("Req processing status: %v\n", err)
 http.Error(w, err.Error(), http.StatusUnauthorized)
 return
 }
 log.Println("Processing Request")
 auth := strings.SplitN(req.Header["Authorization"][0], " ", 2)

payload, _ := base64.StdEncoding.DecodeString(auth[1])
 pair := strings.SplitN(string(payload), ":", 2)
 if strings.HasSuffix(path, "google.com") {
 fmt.Fprintf(w, "Hello Gopher, %s. Here is what you sent: %s %s\n", strings.TrimSuffix(path, "@google.com"), pair[0], pair[1])
 return
 }
 fmt.Fprintf(w, "Hello dear, %s. Here is what you sent: %s %s\n", path, pair[0], pair[1])
 return

}
func panicPathHandler(w http.ResponseWriter, req *http.Request) {
 log.Printf("Request: %v\n", req)
 //create a panic situation
 names := []string{"aname", "bname", "cname"}

m := make(map[string]map[string]TStruct, len(names))
 for _, name := range names {
 m["uid"][name] = TStruct{cn: "Chaithra", street: "dp road"}
 }
}

func Validate(username, password string) (bool, error) {

fmt.Println(username)
 fmt.Println(password)
 if password == username+"!!" {
 return true, nil
 }
 return false, errors.New("Invalid Credentials")
}

func isAuthenticated(w http.ResponseWriter, req *http.Request) (bool, error) {
 if req.Header["Authorization"] == nil {
 err := errors.New("Authorization missing")
 return false, err
 }
 auth := strings.SplitN(req.Header["Authorization"][0], " ", 2)

if len(auth) != 2 || auth[0] != "Basic" {
 err := errors.New("Bad Authorization Syntax")
 return false, err
 }

payload, _ := base64.StdEncoding.DecodeString(auth[1])
 pair := strings.SplitN(string(payload), ":", 2)

if len(pair) != 2 {
 err := errors.New("Username or Password is missing")
 return false, err
 }

if _, err := Validate(pair[0], pair[1]); err != nil {
 return false, errors.New("Credentials validation failed")
 }
 return true, nil

}

It exposes service endpoints like /{reqParam} and a special one, /panic. For /panic it raises a panic that gets captured in the log, which is set to be written into a log file named 'regLogFile.txt'. Here is the log captured for a few hits to these endpoints:


2017/04/22 13:33:51 Request: &{GET /vikash111 HTTP/1.1 1 1 map[Accept-Encoding:[gzip, deflate] Cookie:[i18next=en] Connection:[keep-alive] User-Agent:[Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0] Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language:[null]] 0x8bd080 0 [] false localhost:8080 map[] map[] <nil> map[] 127.0.0.1:56767 /vikash111 <nil> <nil> <nil> 0xc042126200}
2017/04/22 13:33:51 Req processing status: Authorization missing
2017/04/22 13:34:37 Request: &{GET /vikash111 HTTP/1.1 1 1 map[Connection:[keep-alive] User-Agent:[Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0] Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language:[null] Accept-Encoding:[gzip, deflate] Authorization:[Basic YXVzZXI6YXVzZXIhITEx] Cookie:[i18next=en]] 0x8bd080 0 [] false localhost:8080 map[] map[] <nil> map[] 127.0.0.1:56767 /vikash111 <nil> <nil> <nil> 0xc04200a700}
2017/04/22 13:34:37 Req processing status: Credentials validation failed
2017/04/22 13:35:15 Request: &{GET /panic HTTP/1.1 1 1 map[Accept-Language:[null] Accept-Encoding:[gzip, deflate] Authorization:[Basic YXVzZXI6YXVzZXIhITEx] Cookie:[i18next=en] Connection:[keep-alive] User-Agent:[Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0] Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8]] 0x8bd080 0 [] false localhost:8080 map[] map[] <nil> map[] 127.0.0.1:56767 /panic <nil> <nil> <nil> 0xc04200a840}
2017/04/22 13:35:15 http: panic serving 127.0.0.1:56767: assignment to entry in nil map
goroutine 18 [running]:
net/http.(*conn).serve.func1(0xc042136000)
C:/Eee/Go/src/net/http/server.go:1491 +0x131
panic(0x6e78e0, 0xc042005480)
C:/Eee/Go/src/runtime/panic.go:458 +0x251
main.panicPathHandler(0x89dce0, 0xc042114270, 0xc0420b41e0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:80 +0x225
net/http.HandlerFunc.ServeHTTP(0x769368, 0x89dce0, 0xc042114270, 0xc0420b41e0)
C:/Eee/Go/src/net/http/server.go:1726 +0x4b
main.panicHandler.ServeHTTP(0x89af20, 0x769368, 0x89dce0, 0xc042114270, 0xc0420b41e0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:30 +0x56
main.(*panicHandler).ServeHTTP(0xc042005020, 0x89dce0, 0xc042114270, 0xc0420b41e0)
<autogenerated>:1 +0x80
net/http.(*ServeMux).ServeHTTP(0x8bd7a0, 0x89dce0, 0xc042114270, 0xc0420b41e0)
C:/Eee/Go/src/net/http/server.go:2022 +0x86
net/http.serverHandler.ServeHTTP(0xc042064280, 0x89dce0, 0xc042114270, 0xc0420b41e0)
C:/Eee/Go/src/net/http/server.go:2202 +0x84
net/http.(*conn).serve(0xc042136000, 0x89e3a0, 0xc042126140)
C:/Eee/Go/src/net/http/server.go:1579 +0x4be
created by net/http.(*Server).Serve
C:/Eee/Go/src/net/http/server.go:2293 +0x454
2017/04/22 13:35:15 Request: &{GET /panic HTTP/1.1 1 1 map[Accept-Encoding:[gzip, deflate] Authorization:[Basic YXVzZXI6YXVzZXIhITEx] Cookie:[i18next=en] Connection:[keep-alive] User-Agent:[Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0] Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language:[null]] 0x8bd080 0 [] false localhost:8080 map[] map[] <nil> map[] 127.0.0.1:56800 /panic <nil> <nil> <nil> 0xc04200a900}
2017/04/22 13:35:15 http: panic serving 127.0.0.1:56800: assignment to entry in nil map
goroutine 7 [running]:
net/http.(*conn).serve.func1(0xc042064580)
C:/Eee/Go/src/net/http/server.go:1491 +0x131
panic(0x6e78e0, 0xc042005690)
C:/Eee/Go/src/runtime/panic.go:458 +0x251
main.panicPathHandler(0x89dce0, 0xc042114340, 0xc0420b43c0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:80 +0x225
net/http.HandlerFunc.ServeHTTP(0x769368, 0x89dce0, 0xc042114340, 0xc0420b43c0)
C:/Eee/Go/src/net/http/server.go:1726 +0x4b
main.panicHandler.ServeHTTP(0x89af20, 0x769368, 0x89dce0, 0xc042114340, 0xc0420b43c0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:30 +0x56
main.(*panicHandler).ServeHTTP(0xc042005020, 0x89dce0, 0xc042114340, 0xc0420b43c0)
<autogenerated>:1 +0x80
net/http.(*ServeMux).ServeHTTP(0x8bd7a0, 0x89dce0, 0xc042114340, 0xc0420b43c0)
C:/Eee/Go/src/net/http/server.go:2022 +0x86
net/http.serverHandler.ServeHTTP(0xc042064280, 0x89dce0, 0xc042114340, 0xc0420b43c0)
C:/Eee/Go/src/net/http/server.go:2202 +0x84
net/http.(*conn).serve(0xc042064580, 0x89e3a0, 0xc04200a880)
C:/Eee/Go/src/net/http/server.go:1579 +0x4be
created by net/http.(*Server).Serve
C:/Eee/Go/src/net/http/server.go:2293 +0x454
2017/04/22 13:35:15 Request: &{GET /panic HTTP/1.1 1 1 map[Accept:[text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8] Accept-Language:[null] Accept-Encoding:[gzip, deflate] Authorization:[Basic YXVzZXI6YXVzZXIhITEx] Cookie:[i18next=en] Connection:[keep-alive] User-Agent:[Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0]] 0x8bd080 0 [] false localhost:8080 map[] map[] <nil> map[] 127.0.0.1:56801 /panic <nil> <nil> <nil> 0xc04200aa00}
2017/04/22 13:35:15 http: panic serving 127.0.0.1:56801: assignment to entry in nil map
goroutine 19 [running]:
net/http.(*conn).serve.func1(0xc042136180)
C:/Eee/Go/src/net/http/server.go:1491 +0x131
panic(0x6e78e0, 0xc042005890)
C:/Eee/Go/src/runtime/panic.go:458 +0x251
main.panicPathHandler(0x89dce0, 0xc042114410, 0xc0420b45a0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:80 +0x225
net/http.HandlerFunc.ServeHTTP(0x769368, 0x89dce0, 0xc042114410, 0xc0420b45a0)
C:/Eee/Go/src/net/http/server.go:1726 +0x4b
main.panicHandler.ServeHTTP(0x89af20, 0x769368, 0x89dce0, 0xc042114410, 0xc0420b45a0)
C:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server/main.go:30 +0x56
main.(*panicHandler).ServeHTTP(0xc042005020, 0x89dce0, 0xc042114410, 0xc0420b45a0)
<autogenerated>:1 +0x80
net/http.(*ServeMux).ServeHTTP(0x8bd7a0, 0x89dce0, 0xc042114410, 0xc0420b45a0)
C:/Eee/Go/src/net/http/server.go:2022 +0x86
net/http.serverHandler.ServeHTTP(0xc042064280, 0x89dce0, 0xc042114410, 0xc0420b45a0)
C:/Eee/Go/src/net/http/server.go:2202 +0x84
net/http.(*conn).serve(0xc042136180, 0x89e3a0, 0xc04200a980)
C:/Eee/Go/src/net/http/server.go:1579 +0x4be
created by net/http.(*Server).Serve
C:/Eee/Go/src/net/http/server.go:2293 +0x454

 

The log captures enough information, but there are a few issues with it. It is very tricky to get useful insight out of it: fine for a human to read, but barely usable by a machine. The context is also not sufficiently captured; at each instance we create a new error, and the error chain (if any) gets lost, meaning we lose error context.

Let's modify this very same application with three aims: attach context to errors, log in a machine-parseable format (like JSON), and report panics to an external system like Bugsnag so that these errors and panics go through a proper error-handling life cycle.

More context to errors/panics == ease in debugging the issue

So, time to identify and collect the required tools. Below is the list of tools we will need:

  • logrus – go get github.com/Sirupsen/logrus
  • errors – go get github.com/pkg/errors
  • bugsnag – go get github.com/bugsnag/bugsnag-go

With the help of these tools the modified application code is:


package main

import (
"encoding/base64"
"fmt"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"time"

log "github.com/Sirupsen/logrus"
bugsnag "github.com/bugsnag/bugsnag-go"
"github.com/pkg/errors"
)

var logger *log.Entry
var logFilePath = "./log/logFile_"

type TStruct struct {
cn string
street string
}

type panicHandler struct {
http.Handler
}

func (h panicHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
//recover any panic raised while serving the request and pass it on to bugsnag
defer bugsnag.Recover()
h.Handler.ServeHTTP(w, r)
}

// Main function of our app. The entry point to our application.
// In this function we map requerst path to its handler.
func main() {

bugsnag.Configure(bugsnag.Configuration{
APIKey: "Your Bugsnag API Key",
ReleaseStage: "trail",
// more configuration options
})
log.SetFormatter(&log.JSONFormatter{})

t := time.Now()
logFileSuffix := fmt.Sprintf("%d%02d%02d%02d%02d%02d",
t.Year(), t.Month(), t.Day(),
t.Hour(), t.Minute(), t.Second())
logFilename := logFilePath + logFileSuffix + ".txt"
file, err := os.OpenFile(logFilename, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err == nil {
// close the log file only if it was opened successfully
defer file.Close()
log.SetOutput(file)
} else {
log.Info("Failed to log to file, using default stderr")
}

http.Handle("/panic", panicHandler{http.HandlerFunc(panicPathHandler)})
http.HandleFunc("/", myHandleFunc)

err = http.ListenAndServe("localhost:9090", bugsnag.Handler(nil))

if err != nil {
fmt.Printf("%+v\n", err)
return
}

}
func init() {

}

func myHandleFunc(w http.ResponseWriter, req *http.Request) {

logger = log.WithFields(log.Fields{

"URL": req.URL.Path,
"Method": req.Method,
"RemoteAddress": req.RemoteAddr,
"RequestURI": req.RequestURI,
})
logger.Info("Processing Request")
w.Header().Set("Content-Type", "text/plain")
path := req.URL.Path[1:]

_, err := isAuthenticated(req)
if err != nil {
err = errors.Wrap(err, "Authentication Failed")
logger.Errorf("%v", err)
logger.Errorf("%+v", err)
bugsnag.Notify(err,
bugsnag.SeverityError)

http.Error(w, errors.Cause(err).Error(), http.StatusUnauthorized)
return
}
auth := strings.SplitN(req.Header["Authorization"][0], " ", 2)
payload, _ := base64.StdEncoding.DecodeString(auth[1])
pair := strings.SplitN(string(payload), ":", 2)
if strings.HasSuffix(path, "google.com") {
fmt.Fprintf(w, "Hello Gopher, %s. Here is what you sent: %s %s\n", strings.TrimSuffix(path, "@google.com"), pair[0], pair[1])
return
}
fmt.Fprintf(w, "Hello dear, %s. Here is what you sent: %s %s\n", path, pair[0], pair[1])

}
func panicPathHandler(w http.ResponseWriter, req *http.Request) {

//create a panic situation
names := []string{"aname", "bname", "cname"}

m := make(map[string]map[string]TStruct, len(names))
for _, name := range names {
m["uid"][name] = TStruct{cn: "Chaithra", street: "dp road"}
}
}

func Validate(username, password string, err error) (bool, error) {
var errL error
if password == username+"!!" {
fmt.Println("U and P matched")
return true, nil
}
fmt.Println("U and P mismatched")
// adding additional field to capture in log statement.
// userId as identifier to the captured error
logger.Data["userId"] = username
if err == nil {
errL = errors.New("Invalid Credentials")
} else {
errL = errors.Wrap(err, "Invalid Credentials")
}
return false, errL

}

func isAuthenticated(req *http.Request) (bool, error) {
fmt.Println("isAuthenticated")
var err error
if req.Header["Authorization"] == nil {
err = errors.New("Authorization missing")
return false, err
}
auth := strings.SplitN(req.Header["Authorization"][0], " ", 2)

if len(auth) != 2 || auth[0] != "Basic" {
// errors.Wrap returns nil when err is nil, so create a fresh error here
err = errors.New("Bad Authorization Syntax")
return false, err
}

payload, _ := base64.StdEncoding.DecodeString(auth[1])
pair := strings.SplitN(string(payload), ":", 2)

if len(pair) != 2 {
err = errors.New("Username or Password is missing")
return false, err
}

if _, err1 := Validate(pair[0], pair[1], err); err1 != nil {
fmt.Println("U and P didn't match")
return false, err1
}
return true, nil
}

The log that we get now is:


{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"info","msg":"Processing Request","time":"2017-04-22T15:15:48+05:30"}
{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"error","msg":"Authentication Failed: Invalid Credentials","time":"2017-04-22T15:15:48+05:30","userId":"auser"}
{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"error","msg":"Invalid Credentials\nmain.Validate\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:138\nmain.isAuthenticated\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:168\nmain.myHandleFunc\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:95\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.(*ServeMux).ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2022\ngithub.com/bugsnag/bugsnag-go.Handler.func1\n\tC:/Eee/GoWorkspace/src/github.com/bugsnag/bugsnag-go/bugsnag.go:88\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.serverHandler.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2202\nnet/http.(*conn).serve\n\tC:/Eee/Go/src/net/http/server.go:1579\nruntime.goexit\n\tC:/Eee/Go/src/runtime/asm_amd64.s:2086\nAuthentication Failed\nmain.myHandleFunc\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:97\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.(*ServeMux).ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2022\ngithub.com/bugsnag/bugsnag-go.Handler.func1\n\tC:/Eee/GoWorkspace/src/github.com/bugsnag/bugsnag-go/bugsnag.go:88\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.serverHandler.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2202\nnet/http.(*conn).serve\n\tC:/Eee/Go/src/net/http/server.go:1579\nruntime.goexit\n\tC:/Eee/Go/src/runtime/asm_amd64.s:2086","time":"2017-04-22T15:15:48+05:30","userId":"auser"}
{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"info","msg":"Processing Request","time":"2017-04-22T15:15:48+05:30"}
{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"error","msg":"Authentication Failed: Invalid Credentials","time":"2017-04-22T15:15:48+05:30","userId":"auser"}
{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"error","msg":"Invalid Credentials\nmain.Validate\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:138\nmain.isAuthenticated\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:168\nmain.myHandleFunc\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:95\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.(*ServeMux).ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2022\ngithub.com/bugsnag/bugsnag-go.Handler.func1\n\tC:/Eee/GoWorkspace/src/github.com/bugsnag/bugsnag-go/bugsnag.go:88\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.serverHandler.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2202\nnet/http.(*conn).serve\n\tC:/Eee/Go/src/net/http/server.go:1579\nruntime.goexit\n\tC:/Eee/Go/src/runtime/asm_amd64.s:2086\nAuthentication Failed\nmain.myHandleFunc\n\tC:/Eee/GoWorkspace/src/github.com/vikash1976/go-tooling/https-server-prod/main.go:97\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.(*ServeMux).ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2022\ngithub.com/bugsnag/bugsnag-go.Handler.func1\n\tC:/Eee/GoWorkspace/src/github.com/bugsnag/bugsnag-go/bugsnag.go:88\nnet/http.HandlerFunc.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:1726\nnet/http.serverHandler.ServeHTTP\n\tC:/Eee/Go/src/net/http/server.go:2202\nnet/http.(*conn).serve\n\tC:/Eee/Go/src/net/http/server.go:1579\nruntime.goexit\n\tC:/Eee/Go/src/runtime/asm_amd64.s:2086","time":"2017-04-22T15:15:48+05:30","userId":"auser"}

This log is in JSON format and easily machine-parseable for insights. The error now carries its context: Authentication Failed, because of Invalid Credentials, for userId "auser", e.g.

{"Method":"GET","RemoteAddress":"127.0.0.1:52668","RequestURI":"/tina","URL":"/tina","level":"error","msg":"Authentication Failed: Invalid Credentials","time":"2017-04-22T15:15:48+05:30","userId":"auser"}

To add context to errors, we are using the errors package (github.com/pkg/errors) and its functions:

New to create a new error, Wrap to attach further context to an existing error and Cause to fetch the original error. Further reading on the errors package.
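
As a quick, standalone illustration of these three functions, here is a minimal sketch separate from the server code above (readConfig is just a hypothetical helper so we have an error to work with):

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// readConfig is a hypothetical helper that fails, giving us an error to annotate
func readConfig() error {
	return errors.New("config file not found")
}

func main() {
	err := readConfig()

	// Wrap adds context to the error while preserving the original cause
	err = errors.Wrap(err, "startup failed")

	fmt.Println(err)               // startup failed: config file not found
	fmt.Println(errors.Cause(err)) // config file not found
	fmt.Printf("%+v\n", err)       // message plus the stack trace recorded by errors.New
}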

To log the errors we are using logrus. We can set the output format with the SetFormatter function, attach fields/context to log lines with the WithFields function, and add additional context to specific log lines like this:


// adding additional field to capture in log statement.
// userId as identifier to the captured error

logger.Data["userId"] = username

Further reading on logrus.
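
Putting these logrus pieces together, here is a minimal, self-contained sketch (the field values are just illustrative):

package main

import (
	log "github.com/Sirupsen/logrus"
)

func main() {
	// emit structured JSON instead of logrus's default text format
	log.SetFormatter(&log.JSONFormatter{})

	// WithFields returns a *log.Entry that carries this context on every line it logs
	logger := log.WithFields(log.Fields{
		"URL":    "/tina",
		"Method": "GET",
	})
	logger.Info("Processing Request")

	// extra context can be attached to the entry before logging a specific line
	logger.Data["userId"] = "auser"
	logger.Error("Authentication Failed: Invalid Credentials")
}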

To report panics we are using Bugsnag. For that we need to set it up in main like this:


bugsnag.Configure(bugsnag.Configuration{
APIKey: "Your Bugsnag API Key",
ReleaseStage: "trail",
// more configuration options
})

and in our panic handler we need:


func (h panicHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
//recover any panic raised while serving the request and pass it on to bugsnag
defer bugsnag.Recover()
h.Handler.ServeHTTP(w, r)
}

We can notify errors to Bugsnag with the following code:


bugsnag.Notify(err, bugsnag.SeverityError)

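In the same style, Notify also accepts additional context as extra arguments, for example a metadata tab. This fragment is based on my reading of the bugsnag-go API; bugsnag.MetaData and the "Auth"/"userId" names are illustrative, so verify against the version you use:

// hypothetical extra context attached to the notification
bugsnag.Notify(err,
	bugsnag.SeverityError,
	bugsnag.MetaData{
		"Auth": {
			"userId": "auser",
		},
	})
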
This is what we get in Bugsnag console for our project:

Dashboard:

[Screenshot: the Bugsnag dashboard for the project]

Panic:

[Screenshot: the reported panic as it appears in the Bugsnag console]

Error:

[Screenshot: the notified error as it appears in the Bugsnag console]

After signing up for Bugsnag, create a project for Go's net/http. To familiarize yourself with the API usage, read the Bugsnag Go net-http documentation.

Errors and panics, when captured with proper context, provide very useful insights!

Go Lang Channels – cool axioms and how we can take them to our advantage

Channels are one of the most useful additions to Go and a core ingredient of Go concurrency. In the simplest form: channels are the pipes that connect concurrent goroutines. We can send values into a channel from one goroutine and receive those values in another goroutine. By default, sends and receives block until both the sender and the receiver are ready.

Channels can be buffered or unbuffered. By default channels are unbuffered, meaning a send (c <- value) will only proceed if there is a corresponding receive (<-c) ready to take the sent value. Buffered channels accept a limited number of values without a corresponding receiver for those values.


//Buffered Channel - channel with a fixed buffer size of 2

c := make(chan string, 2)

//Unbuffered Channel - channel with no buffer

c := make(chan string)

Let's start the adventure of getting to know channel properties and how we can take advantage of them.

package main

import "fmt"

func main() {

//this one is a buffered channel
c := make(chan string, 2)

//sending 2 messages onto this channel
c <- "Hey"
c <- "How's it going?"

// we are receiving these two values as usual.
fmt.Println(<-c)
fmt.Println(<-c)
}

When we execute it, we get:

Hey
How's it going?

Let’s change the code a little:

package main

import "fmt"

func main() {

//this one is an unbuffered channel
c := make(chan string)

//sending 2 messages onto this channel
c <- "Hey"
c <- "How's it going?"

// we are receiving these two values as usual.
fmt.Println(<-c)
fmt.Println(<-c)
}

Now let's see what we get when we execute it:

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send]:
main.main()
	/tmp/sandbox036932895/main.go:10 +0x80

What! Just by changing the channel from buffered to unbuffered, it lands in a deadlock.

Yes, because – "by default channels are unbuffered, meaning that they will only accept sends if there is a corresponding receive ready to take the sent value."

So in this case c is an unbuffered channel and we are attempting to send on it, but there is no receiver for this channel. We can fix it either by making the channel buffered (see the sketch after the output below) or by registering a receiver on this channel in another goroutine, like:

package main

import "fmt"

func main() {

//this one is an unbuffered channel
c := make(chan string)

go func() {
fmt.Println(<-c)
}()
//sending messages onto this channel
c <- "Hey, How's it going?"
}

And we get:

Hey, How's it going?
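
For completeness, the other fix mentioned above – making the channel buffered so the send can complete even though no receiver is waiting yet – looks like this:

package main

import "fmt"

func main() {
	// a buffer of 1 lets the send below complete without a ready receiver
	c := make(chan string, 1)

	c <- "Hey, How's it going?"

	// the value sits in the buffer until we receive it
	fmt.Println(<-c)
}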

So we learnt that a send on an unbuffered channel blocks until a receiver is ready. OK, how about this code?


package main

func main() {
var c chan string //this is an uninitialized channel, i.e. a nil channel
//sending onto a nil channel
c <- "Hey, How's it going?"
}

This also complains of deadlock.

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send (nil chan)]:
main.main()
	/tmp/sandbox334594764/main.go:6 +0x60

A bit of change again:


package main
import "fmt"
func main() {
var c chan string //this is an uninitialized channel, i.e. a nil channel
//reading from a nil channel
fmt.Println(<-c)
}

This also complains of deadlock.

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan receive (nil chan)]:
main.main()
	/tmp/sandbox334594764/main.go:6 +0x60

Another learning: a send to or a receive from a nil channel blocks forever.

Can we work this one out like this?


package main

import "fmt"

func main() {
var c chan string
go func() {
fmt.Println(<-c)
}()
c <- "let's get started"
}

Let's execute it, and we get this:

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send (nil chan)]:
main.main()
	/tmp/sandbox749673214/main.go:10 +0x80

goroutine 5 [chan receive (nil chan)]:
main.main.func1(0x0, 0x434080)
	/tmp/sandbox749673214/main.go:8 +0x40
created by main.main
	/tmp/sandbox749673214/main.go:9 +0x40

Meaning this one can't be worked around the way we fixed the unbuffered case. Can the fact that "a nil channel always blocks" be used to our advantage? Let's say we have three channels that we want to wait on until they are all closed; the code would look like:


package main

import (
"fmt"
"time"
)

func WaitForClosingOut(c1, c2, c3 chan bool) {
for c1 != nil || c2 != nil || c3 != nil {

select {
case <-c1:
fmt.Printf("closing %v channel\n", c1)
c1 = nil
case <-c2:
fmt.Printf("closing %v channel\n", c2)
c2 = nil
case <-c3:
fmt.Printf("closing %v channel\n", c3)
c3 = nil
}
}
}

func main() {
c1, c2, c3 := make(chan bool), make(chan bool), make(chan bool)

t0 := time.Now()

go func() {
fmt.Println("Channels closing routine...")
close(c1)
close(c2)
close(c3)
}()
WaitForClosingOut(c1, c2, c3)
fmt.Printf("Channels took %v to get closed\n", time.Since(t0))
}

Execution results in:

[Screenshot: console output showing each channel being reported closed and the total time taken]

But what is the point here? It looks fine, so what am I suggesting? To get to the point, comment out the line c1 = nil in WaitForClosingOut. We will see an infinite loop. But why? Because a closed channel is immediately ready to receive. Receive what? The zero value of the channel's element type, in this case false, the default value of bool. So every time the select runs it will pick the first case, <-c1, and the loop spins forever.
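
To see this "a closed channel is always ready to receive" behaviour in isolation, here is a tiny sketch of my own (not part of the original example):

package main

import "fmt"

func main() {
	c := make(chan bool)
	close(c)

	// receives from a closed channel never block; they keep yielding the zero value
	fmt.Println(<-c) // false
	fmt.Println(<-c) // false, again

	// the comma-ok form tells us the channel is closed rather than delivering a real value
	v, ok := <-c
	fmt.Println(v, ok) // false false
}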

When a nil channel appears as a case in a select statement, that case is effectively ignored. So setting c1 to nil once it is closed removes it from selection; when c2 and c3 have been closed and set to nil as well, the loop condition becomes false and we exit without spinning indefinitely.
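
This "nil channel disables its select case" trick is useful beyond the closing example above. As a sketch of my own (not from the original code), here is how it can be used to merge two channels, draining both until each is closed:

package main

import "fmt"

// merge drains a and b until both are closed, then closes out.
func merge(a, b <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for a != nil || b != nil {
			select {
			case v, ok := <-a:
				if !ok {
					a = nil // a is closed: disable this case
					continue
				}
				out <- v
			case v, ok := <-b:
				if !ok {
					b = nil // b is closed: disable this case
					continue
				}
				out <- v
			}
		}
	}()
	return out
}

func main() {
	a, b := make(chan int), make(chan int)
	go func() { a <- 1; a <- 2; close(a) }()
	go func() { b <- 10; close(b) }()

	for v := range merge(a, b) {
		fmt.Println(v)
	}
}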

Knowing that a closed channel is always ready to receive, we can implement patterns like signalling an unknown number of goroutines, without knowing anything about them, while ensuring there is no deadlock.


package main

import (
"fmt"
"sync"
"time"
)

func main() {
finish := make(chan struct{})
var done sync.WaitGroup

const n = 100 // number of goroutines to signal; any concrete value works here
for i := 0; i < n; i++ {

done.Add(1)
go func() {
select {
case <-time.After(15 * time.Minute):
case <-finish:
}
done.Done()
}()
}
t0 := time.Now()
close(finish)
done.Wait()
fmt.Printf("Goroutine took %v to stop\n", time.Since(t0))
}

By declaring finish to be of type chan struct{} we're saying that we are only interested in its closed property, not in any value sent or received over it, since struct{} carries no data.

One more property of channels that we often come across: a send to a closed channel panics.


package main

import (
"fmt"
)

func main() {
var c = make(chan int, 25)
// deliberately broken: every goroutine closes c, so a later send or close panics
for i := 0; i < 5; i++ {
go func() {
for j := 0; j < 5; j++ {
c <- j
}
close(c)
}()
}
for i := range c {
fmt.Println(i)
}
}

Executing this:

[Screenshot: console output showing the resulting runtime panic]

We can fix it by following the pattern explained here. See a simpler attempt to solve it below:


package main

import (
"fmt"
"sync"
)

func main() {
var wg sync.WaitGroup

var c = make(chan int, 25)
for i := 0; i < 5; i++ {
wg.Add(1)
go func(i int) { // pass i as a parameter so each goroutine reports its own index
for j := 0; j < 5; j++ {
c <- j
}
fmt.Printf("Done %d\n", i)
wg.Done()
}(i)
}

go func() {
fmt.Println("Waiting to write")
for i := range c {
fmt.Println(i)
}
}()
wg.Wait()
close(c)
for i := range c {
fmt.Println(i)
}
fmt.Println("Exiting...")

}

The execution result is:

[Screenshot: console output of the fixed program – the values are received, the Done messages appear, and the program prints Exiting... before terminating]

Have a great time being Concurrent with Channels and select!!!

References:

Dave’s blogs @ https://dave.cheney.net/