Aug 29, 2019

Terraform

Terraform is an open-source infrastructure-as-code software tool created by HashiCorp; configurations are written in HashiCorp Configuration Language (HCL). Similar to CloudFormation, Terraform is designed as a provisioning tool, which is different from tools like Chef and Puppet, which are primarily designed for configuration management. There is certainly some overlap between configuration management tools and provisioning tools, but it's important to understand what each is best at. Terraform and CloudFormation follow a declarative style: you specify the end state and the tool figures out the sequence and dependencies of the tasks. Tools like Ansible are procedural automation tools, where you write code that specifies the sequence of steps to achieve the end state.

The main purpose of the Terraform language is declaring resources. All other features exist to make the definition of resources more flexible and convenient. A Terraform configuration consists of a root module, where evaluation begins, along with a tree of child modules created when one module calls another.

Terraform Components


variable

Input variables serve as parameters for a Terraform module. When you declare variables in the root module of your configuration, you can set their values using CLI options (-var="my_var=something"), in a .tfvars file, or in environment variables. When you declare them in a child module, the calling module should pass values in the module block. The value assigned to a variable can be accessed (var.my_var) only from expressions within the module where it was declared. Example types: string, number, bool, list, map, set.
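A minimal sketch of a variable declaration (the name and default here are illustrative, not from the original):

```hcl
variable "region" {
  type        = string
  description = "AWS region to deploy into"
  default     = "us-east-1"
}
```

Elsewhere in the module this is read as var.region, and it can be overridden with -var="region=us-west-2", a .tfvars file, or the TF_VAR_region environment variable.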

provider

A provider (like aws or google) requires configuration of its own, like authentication settings, etc. You can have multiple configurations for the same provider by giving each an alias and referencing it by alias name (<PROVIDER NAME>.<ALIAS>). Every time a new provider is added, you need to download it by running terraform init.

resource

A resource block describes one or more infrastructure objects. Each resource is associated with a single resource type. Terraform handles most resource dependencies automatically, but in the rare cases where you need to define a dependency explicitly you can use the depends_on meta-argument. You can use the count meta-argument to create multiple resources. You can add provisioners to any resource; they are used to execute scripts on a local or remote machine.
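A sketch combining the meta-arguments mentioned above (the resource names and AMI id are hypothetical):

```hcl
resource "aws_instance" "web" {
  count         = 2                # creates aws_instance.web[0] and web[1]
  ami           = "ami-12345678"   # hypothetical AMI id
  instance_type = "t2.micro"

  # rarely needed: an explicit dependency Terraform cannot infer
  depends_on = [aws_s3_bucket.logs]

  # provisioners run scripts on a local or remote machine
  provisioner "local-exec" {
    command = "echo created instance ${count.index}"
  }
}
```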

output

This is like the return value of a module: a child module's outputs can be read by the calling module, and root module outputs are printed after apply.

module 

This is a container for multiple resources and helps with code reusability. Every Terraform configuration has at least a root module, which consists of the resources defined in the .tf files in the main working directory.
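A sketch of a module call and the output it exposes (the paths and names are illustrative):

```hcl
# in the root module: call a child module
module "network" {
  source = "./modules/network"
  cidr   = "10.0.0.0/16"
}

# inside ./modules/network: return a value to the caller
output "vpc_id" {
  value = aws_vpc.main.id
}
```

The root module can then reference the child's output as module.network.vpc_id.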

data

Data sources allow fetching data defined outside of Terraform. A very common use case is getting the list of AWS availability zones:

data "aws_availability_zones" "available" {
  state = "available"
}

In the above example, we are storing the list of AWS AZs, which can be accessed anywhere in the code as follows:

"${data.aws_availability_zones.available.names[0]}"

State

Terraform state can be considered a sort of database that maps the Terraform config to the real world. It also tracks resource dependencies. State can be stored locally or remotely, and the location can be configured with -state (or a backend configuration). Terraform locks the state for all operations that could write to it.


Terraform Console

This provides an interactive way to evaluate expressions. It is very handy for experimenting with built-in functions and with interpolation. Just type terraform console at the command prompt and then play with it.

Apr 28, 2019

ES6 New Features

let and const

Hoisting is JavaScript's default behavior of moving all declarations to the top of the current scope, which means the following is valid:
 x = 5
 var x
 y === 7 // false, as the declaration is hoisted but not the initialization
 var y = 7;

JavaScript only hoists declarations, not initializations.
var is function-scoped, which is sometimes confusing. Refer to the following example:

 if(true){
   var x = 20
 }
 console.log(x) //this will log 20

Even though x is declared inside the if block, since var declarations are hoisted it is accessible anywhere within the function.

In ES6, let and const were introduced, which are block-scoped and align with other programming languages like C# and Java. In the above example, if you change var to let, it will throw an error, as you are trying to access x outside the if block where it is declared.

Arrow function

This defines an anonymous function, much like a lambda function in C#. Unlike regular functions, arrow functions do not bind their own this; they capture it from the enclosing scope.
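A short sketch of both points; square, add, and counter are made-up names for illustration:

```javascript
const square = x => x * x;     // concise body, implicit return
const add = (a, b) => a + b;

const counter = {
  count: 0,
  incrementAll() {
    // the arrow callback keeps the method's `this`, so this.count works
    [1, 2, 3].forEach(() => { this.count += 1; });
  }
};
counter.incrementAll();

console.log(square(4));     // 16
console.log(counter.count); // 3
```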

Destructuring Assignment

This makes it possible to unpack values from arrays, or properties from objects, into distinct variables.

[a, b, ...rest] = [10, 20, 30, 40, 50];//a=10,b=20,rest=[30,40,50]

const o = {p: 42, q: true};
const {p: foo, q: bar} = o;

function someMethod({p: foo, q: bar}) { /* foo and bar are available here */ }

This can be used even for nested objects.

The rest parameter gives you access to the remaining items in the form of an array, which makes it a lot easier to run array methods on them.
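For example, a hypothetical sum function that gathers its trailing arguments into a real array and reduces over it:

```javascript
// rest parameter collects the remaining arguments into an Array
function sum(first, ...rest) {
  // rest is a real Array, so reduce/map/filter work directly on it
  return rest.reduce((total, n) => total + n, first);
}

console.log(sum(1, 2, 3, 4)); // 10
```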

Spread syntax allows an iterable such as an array expression or string to be expanded in places where zero or more arguments (for function calls) or elements (for array literals) are expected, or an object expression to be expanded in places where zero or more key-value pairs (for object literals) are expected.
var arr2 = [...arr]; // like arr.slice()
var concat = [...arr1, ...arr2];

var clonedObj = { ...obj1 };
var mergedObj = { ...obj1, ...obj2 };

Template literals

`Hello ${name} !!`

Template literals can also span multiple lines.
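For example (the name and strings are illustrative):

```javascript
const name = 'World';
const greeting = `Hello ${name} !!`;   // expression interpolation

// backticks preserve newlines, so no string concatenation is needed
const multiLine = `line one
line two`;

console.log(greeting); // Hello World !!
```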

ES6 Module

ES6 provides built-in modules in JavaScript; before this you had to use a module system such as CommonJS or AMD.


ES6 Class
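A minimal sketch: ES6 classes are syntactic sugar over JavaScript's existing prototype-based inheritance (Shape and Circle are illustrative names):

```javascript
class Shape {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `I am a ${this.name}`;
  }
}

// extends/super replace manual prototype chaining
class Circle extends Shape {
  constructor(radius) {
    super('circle');
    this.radius = radius;
  }
  get area() {
    return Math.PI * this.radius ** 2;
  }
}

const c = new Circle(1);
console.log(c.describe()); // I am a circle
```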


Mar 31, 2019

.NET Garbage Collection

The garbage collector serves as an automatic memory manager. After the garbage collector is initialized by the CLR, it allocates a segment of memory to store and manage objects. When a garbage collection is triggered, the garbage collector reclaims the memory (managed memory) that is occupied by dead objects. The reclaiming process compacts live objects so that they are moved together, and the dead space is removed, thereby making the heap smaller.

If your managed objects reference unmanaged objects, you will have to explicitly free them (the unmanaged objects). .NET does not allocate unmanaged memory, as it comes from outside sources and thus the GC doesn't know about it. For example, when you open a file (FileStream) you are basically calling (behind the scenes) the CreateFile unmanaged Win32 function. This function allocates an unmanaged file handle directly from the file system. .NET and the GC have strictly no way of tracking this unmanaged object and everything it does.

To handle this you have IDisposable and Dispose, which you implement in your managed object to clean up any unmanaged resources your object created, and you implement a finalizer that calls Dispose(). The finalizer is just a safeguard; it makes sure those resources eventually get cleaned up if the caller forgets to dispose of your class properly.

.NET provides the using statement, a convenient syntax that ensures Dispose is called (even if an exception occurs in the using block) as soon as execution reaches the closing brace.

Garbage collection occurs when the system has low physical memory or when the memory used by allocated objects on the managed heap surpasses an acceptable threshold. You also have the option of calling GC.Collect, but you most likely never need to; only in some unique situations will you use this function.

The heap is organized into generations (0, 1, and 2) so it can handle long-lived and short-lived objects efficiently.

Oct 28, 2018

Angular - Best Practices

Using Immutability

Generally speaking, immutability is a good practice in JavaScript whether we are using Angular or not, so the recommendation to prefer immutability is a good recommendation in all JavaScript code. In general, do not mutate existing objects in memory, but rather create new objects. Using immutability can help you avoid certain classes of bugs, such as bugs that occur when a value is unexpectedly changed from somewhere else in the code. Using immutability can also help with certain types of change detection in Angular. The following shows two common ways of deep copying an object.
function cloneObject(obj) {
    // preserve arrays as arrays; the original always produced plain objects
    var clone = Array.isArray(obj) ? [] : {};
    for (var i in obj) {
        // recurse into nested objects/arrays; copy primitives directly
        // (typeof null is "object", so the null check is required)
        if (obj[i] !== null && typeof obj[i] === "object")
            clone[i] = cloneObject(obj[i]);
        else
            clone[i] = obj[i];
    }
    return clone;
}
// quick alternative; note it drops functions/undefined and stringifies Dates
let newObj = JSON.parse(JSON.stringify(obj));

copy via object assign

The following does a shallow copy; the left-most argument is the target, and values from arguments on the right override those from arguments on the left.
Object.assign({}, employee, {name: 'New Name'}, {age: 10})

copy via spread

const copyObject = {...objtobecloned, name: 'New Name'}
const copyArray = [...arrayToBeCopied]

map, filter, reduce, concat, and spread do not mutate the array (they return a new one), whereas push, pop, and reverse mutate it in place.
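A quick demonstration of the difference between mutating and non-mutating operations:

```javascript
const original = [3, 1, 2];

// non-mutating: map/filter/concat/spread return new arrays
const doubled = original.map(n => n * 2); // [6, 2, 4]
const copy = [...original];

// mutating: push/pop/reverse/sort change the array in place
copy.push(4);   // copy is now [3, 1, 2, 4]
copy.reverse(); // copy is now [4, 2, 1, 3]

console.log(original); // [3, 1, 2] — untouched by the operations above
```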

You can also use library like Immer.

Callback hell

When writing callbacks, keep an eye on the number of curly braces, as deeply nested callbacks get difficult to manage. It is sometimes neater to move a callback (if it is getting bigger) into a separate named function.

Prefixing Component Selectors

Prefixing your component selectors will avoid conflicts if you happen to import a module that has a component selector that conflicts with one of your own.

Delegating Complex Logic to Services

Inside a component you mostly initialize form controls, add validators, handle redirect logic, wire up event handlers, etc.; any business logic or complex processing should live in a service. On the other hand, a service should not have to deal with form control objects.

Code Organization

Properties at the top, followed by the constructor, then interface implementations, then the component's public methods, and private methods at the end. Implement lifecycle hook interfaces like OnInit and OnChanges.

Service Injector Best Practices


For every Angular app, Angular creates a root injector, which is responsible for injecting services wherever they're needed. If you provide a service in an eagerly loaded module, Angular registers it with the root injector, which makes it a singleton available to the entire application. On the other hand, if you provide a service in a lazily loaded feature module, Angular creates a new injector for that module and registers the service there, so the instance of the service is only available to that lazily loaded feature module. This behavior of creating a second instance is unique to lazily loaded feature modules. Just remember that if you provide a service in a lazily loaded module, it is only available to that module, and if you need a single instance of a service to be available everywhere, define it in your core module.

Ahead-of-time Compilation and tree shaking CLI

Use the CLI to build; production builds apply ahead-of-time compilation and tree shaking.

Lazy Loading Feature Modules

Not all modules are downloaded at the launch of the application; only eagerly loaded modules are. There are three types -
 Lazy Loading - The module is downloaded only when the user navigates to the lazily loaded feature.
 Preloading - At launch only eagerly loaded modules are downloaded, so the user is served quickly. Once they are downloaded and the template appears, the router checks for preloaded modules and downloads them. This way application launch stays fast, since at that time the user downloads only the eagerly loaded modules, and once the user is served, Angular downloads the feature modules so they are already available when the user tries to navigate to them.
 Custom Preloading - You can put custom rules in place to define the preloading behavior.

Monitoring Bundle Sizes


It's very important to keep an eye on the bundle size of all the chunks that get created during the build. You can use source-map-explorer to peek into your bundles to see what's taking up the most space. Ultimately the user will be downloading these bundles, and the smaller the bundle, the faster it downloads.
 ng build --prod --source-map

Improving Performance with OnPush Change Detection

Application state changes can be caused by three things: events (click, submit, etc.), XHR (HTTP calls), and timers (setTimeout, setInterval). These are the asynchronous operations that trigger change detection. When change detection is triggered, each binding is re-evaluated, which can be a performance concern on a page with lots of bindings. So if you don't expect a component's bindings to be affected, you can set the OnPush change detection strategy so that its bindings are not re-evaluated on every event, XHR, or timer. To see the default behavior, add <div>{{callMethod()}}</div> to your template and you will notice that callMethod is called every time an event fires on the component or an HTTP call happens.

Pure and Impure Pipe Performance

A pure pipe is only called when Angular detects a change in the value or the parameters passed to the pipe. An impure pipe is called on every change detection cycle, no matter whether the value or parameters change. In the case of a pure pipe, when you pass an array and one of its elements changes, the pipe is not called because the array reference remains the same. So if you have a custom sort pipe (pure) which takes an array, the pipe will not re-execute if you update one of the elements of the array.
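The reference-based behavior can be sketched in plain JavaScript (this is not Angular's actual implementation, just the concept of recomputing only when the input reference changes):

```javascript
// memoize by reference, like a pure pipe: same reference => cached result
function makePurePipe(transform) {
  let lastInput, lastResult, hasRun = false;
  return input => {
    if (!hasRun || input !== lastInput) {
      lastInput = input;
      lastResult = transform(input);
      hasRun = true;
    }
    return lastResult;
  };
}

let calls = 0;
const sortPipe = makePurePipe(arr => { calls++; return [...arr].sort(); });

const data = [3, 1, 2];
sortPipe(data);      // computes: calls === 1
data.push(0);        // mutating does NOT change the reference
sortPipe(data);      // cached: calls is still 1
sortPipe([...data]); // new reference => recomputes: calls === 2
console.log(calls);  // 2
```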

Oct 12, 2018

Angular - Router Guard

The Angular router provides several types of guards. A guard is an Angular injectable service which you register as a provider and use when you configure a route. You also have the option of creating it as just a function, but typically in production code you will have a service. The following is the order in which guards are executed; if any guard returns false, all pending guards are canceled, and the requested navigation is canceled.
  • canDeactivate - The router first executes the canDeactivate guards for the current route to determine whether the user can leave that route.
  • canLoad - If a feature module is loaded asynchronously, the canLoad route guard is checked before the module is loaded; it won't even be downloaded unless the guard requirements are met. Unlike the canActivate, the canLoad method cannot access the ActivatedRouteSnapshot or RouterStateSnapshot because the module defining the route is not yet loaded.
  • canActivateChild - This guards activation of a child route.
  • canActivate - Guard to protect the route.
  • resolve - After all other route guards are checked, the resolvers are executed, so data is resolved only when you know the route will be activated. This is typically used to fetch data before the component is activated, so the component doesn't have to show partial data or contain special logic to work around data still being downloaded from the server.

The router extracts any route parameters from the URL and supplies them to the component through its ActivatedRoute service. It provides access to URL segments, route parameters, query parameters, route data and even the parent route. These are exposed as observables, so you can subscribe and be notified of any parameter change. A component's ngOnInit is only called once, when the component is initialized, meaning a route parameter change (navigating from app/1 to app/2 for a route like app/:id) will not call ngOnInit again; so to make sure your component updates based on the id in the browser, get hold of ActivatedRoute and subscribe to parameter changes. On the other hand, a route guard is called every time, even when only a route parameter changes, and it is passed an ActivatedRouteSnapshot, which has the same properties as ActivatedRoute but as plain values, while ActivatedRoute exposes them as observables.

Oct 5, 2018

Angular 2 - Show spinner using Router Resolvers and Event before activating route

The idea here is that we will create a route resolver and then subscribe to the router events; on the NavigationStart event we will start the spinner, and on NavigationEnd and other related events we will stop it. This way the new component is not activated until the data needed to render it has been fetched, and while the data is being fetched the user is shown the spinner.

A route resolver is a service which implements the Resolve interface from '@angular/router'. Here you implement the resolve method, which has two parameters: ActivatedRouteSnapshot and RouterStateSnapshot. This method returns an observable of the object you want to return. In your route configuration, you activate this by adding resolve. Through ActivatedRouteSnapshot, you can get hold of all route-related data for the route being activated, like parameters, query string, etc. At this point you can call your service to interact with the server before the new component is activated. This way you can handle any error the server may return, and your component is never partially displayed.

To implement the spinner, in your application component (the app's root component), subscribe to the router events and set a variable based on which you show the spinner on the page (using CSS you can place the spinner at the center of the page):

 router.events.subscribe(
  (event: Event) => {
   if (event instanceof NavigationStart) {
     this.loading = true;
   } else if (
     event instanceof NavigationEnd ||
     event instanceof NavigationCancel ||
     event instanceof NavigationError ) {
     this.loading = false;
   }
  });

Sep 28, 2018

Docker Bridge Network

A Docker bridge (user-defined) network allows containers connected to the same bridge network to communicate with each other. This is single-host networking, meaning it applies to containers running on the same Docker daemon host. Even if we create a bridge network with the same name on another host, we will have two distinct networks, and a container on one host will not be able to talk to a container on the other. For communication among containers running on different Docker daemon hosts, you can use an overlay network.

When you start Docker, a default bridge network (also called bridge) is created automatically, and newly started containers connect to it unless otherwise specified. Run the following command to list all the networks:
docker network ls

You can create a user-defined network with the following command and then create containers on that bridge network:
docker network create -d bridge --subnet 10.0.0.0/24 my-bridge
Any container you create on this bridge network will get an IP from the subnet you specified above:
docker run -dt --name c1 --network my-bridge alpine sleep 1d
docker run -dt --name c2 --network my-bridge alpine sleep 1d

In the above example I am creating two containers on my user-defined network. Now run the following command to inspect the network:
docker network inspect my-bridge

The output of the above command will show the details of my-bridge, including the subnet, the containers on the network, and their IPs.

Now, since the two containers are running on the same user-defined network, all ports are automatically exposed between them and no ports to the outside world. This makes it easy for containerized applications to communicate with each other. So now try the following command, which will execute sh in container c1:

docker exec -it c1 sh

Now from c1 you should be able to ping c2; try the following:
ping c2

So the user-defined bridge provides automatic DNS resolution between containers; in the case above, we were able to ping c2 by its name, without knowing its IP. Every Docker engine has an embedded DNS service, meaning any time we create a container with the --name flag, an entry for that container gets added to the DNS server, and then any other container on the same network can talk to it by its name. Every container gets a DNS resolver, not a full-blown DNS server, just a small resolver that can trap and forward name-based queries. It listens on the standard DNS port 53 at 127.0.0.11 inside every container. The resolver intercepts all DNS requests from the container and forwards them to a DNS server service running on the local Docker host. The DNS server on the Docker host then either resolves the name or sends it off to the big wide world of public DNS.

On the other hand, if we want a container on a bridge network to be accessible from outside that network (a container on another host or network, or a client outside the host), we need to publish the container's service on a host port. In the following command we are publishing the container's ports 8091-8094 and 11210 through the same host ports:

docker run -d --name couchbase -p 8091-8094:8091-8094 -p 11210:11210 couchbase