Dec 19, 2017

S3

S3 provides secure, durable, highly scalable object-based storage. The data is stored across multiple devices and facilities.
  • Files can be anywhere from 0 bytes to 5 TB. 
  • Files are stored in buckets.
  • You can access a bucket with a URL of the form https://s3.amazonaws.com/<bucketname>, so the bucket name must be globally unique.
  • When you upload a file to an S3 bucket, you get a 200 status code on success.
  • Read-after-write consistency for PUTs of new objects.
  • Eventual consistency for overwrite PUTs and DELETEs. This is because an object stored across multiple devices and facilities may take time to propagate. Propagation may take milliseconds to a few seconds, but at any point the data is atomic, meaning you will get either the old data or the new data, never a partial mix.
  • S3 is object-based storage, which means it's suitable for objects like PDFs, images etc. It is not for installing an OS or a DB. Each object consists of:
    • Key - Name of the object. You can add some random characters as a prefix.
    • Value - The data itself, made of a sequence of bytes.
    • Version ID
    • Metadata 
  • You can add access control at bucket level or object level.
  • By default buckets are private and all objects stored inside them are private.
  • An S3 bucket can be configured to create access logs which record all requests made to the bucket; the logs can be delivered to another bucket.
  • An S3 bucket can be used to host a static web site. Format of the URL is http://<bucketname>.s3-website-<region>.amazonaws.com
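As a quick sketch with the AWS CLI (the bucket name and file are hypothetical), creating a bucket and uploading an object looks like this:

 # create a bucket (name must be globally unique)
 aws s3 mb s3://my-example-bucket-12345
 # upload a file; a successful PUT returns HTTP 200
 aws s3 cp ./report.pdf s3://my-example-bucket-12345/report.pdf
 # list objects in the bucket
 aws s3 ls s3://my-example-bucket-12345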

S3 Storage 

  • S3 - 99.99% availability, 11 9s durability, stored redundantly across multiple devices in multiple facilities and is designed to sustain loss of 2 facilities concurrently
  • S3 - IA (Infrequent Access) - Here you are charged a retrieval fee. 99.9% availability, 11 9s durability, stored redundantly across multiple devices in multiple facilities and designed to sustain the loss of 2 facilities concurrently.
  • Reduced Redundancy Storage - 99.99% availability, 99.99% durability, suitable for files which can be reproduced in case of loss. Concurrent fault tolerance of 1 facility.
  • Glacier - Used for data archival; retrieval may take 3-5 hrs. 11 9s durability. It has a retrieval fee. 

S3 Charges

  • Storage
  • Request
  • Storage management pricing - When you tag objects, Amazon charges on a per-tag basis.
  • Data transfer fee - When you replicate data or migrate from one region to another.
  • Transfer Acceleration - It takes advantage of Amazon CloudFront's globally distributed edge locations. Data is transferred between the edge location and the S3 source over an optimized network path. AWS provides a speed comparison tool so you can check the benefit for your location.

Access

  • Owner Access
  • Other AWS Account
  • Public access

Encryption

Data in transit is protected using SSL/TLS. For encryption at rest the options are:
  • AES-256 - Server side encryption with Amazon S3-Managed Keys (SSE-S3)
  • AWS-KMS - Server side encryption with AWS KMS-Managed Keys (SSE-KMS)
  • Server side encryption with Customer-Provided Keys (SSE-C)
  • Client side encryption
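
For illustration (bucket and file names are hypothetical), server side encryption can be requested per object from the CLI; SSE-S3 and SSE-KMS are shown below:

 # SSE-S3 (AES-256, S3 managed keys)
 aws s3 cp ./data.csv s3://my-example-bucket-12345/data.csv --sse AES256
 # SSE-KMS (KMS managed keys)
 aws s3 cp ./data.csv s3://my-example-bucket-12345/data.csv --sse aws:kms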

Versioning

  • Stores all versions of an object
  • Once enabled, versioning cannot be disabled, only suspended
  • Versioning's MFA Delete capability uses multi-factor authentication to add an extra layer of security for deletes
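
A minimal sketch of enabling versioning on a bucket from the CLI (bucket name is hypothetical):

 aws s3api put-bucket-versioning --bucket my-example-bucket-12345 \
     --versioning-configuration Status=Enabled
 # check the current state
 aws s3api get-bucket-versioning --bucket my-example-bucket-12345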

Cross Region Replication

  • Versioning must be enabled on both source and destination bucket
  • Files already in the bucket when replication is enabled are not replicated automatically; all subsequently uploaded files will be replicated automatically.
  • You cannot replicate to multiple buckets.
  • You cannot replicate to the same region.
  • Delete markers are replicated, but deleting individual versions or delete markers is not replicated.
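
A hedged sketch of enabling replication from the CLI (the role ARN and bucket names are hypothetical; versioning must already be enabled on both buckets). With replication.json containing:

 {
   "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
   "Rules": [
     {
       "Status": "Enabled",
       "Prefix": "",
       "Destination": { "Bucket": "arn:aws:s3:::my-destination-bucket" }
     }
   ]
 }

apply it with:

 aws s3api put-bucket-replication --bucket my-source-bucket \
     --replication-configuration file://replication.json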

Life Cycle management

Lifecycle rules help you manage storage costs by controlling the lifecycle of your objects. You can create lifecycle rules to automatically transition your objects to Standard-IA, archive them to the Glacier storage class, and remove them after a specified period of time. You can use lifecycle rules to manage all versions of your objects. 
  • Can be used in conjunction with versioning
  • Can be applied to the current version or previous versions
  • Transition to IA - minimum object size 128 KB and at least 30 days after creation
  • Archive to Glacier - 30 days after IA, or if going directly from Standard then 1 day after creation
  • You can expire current versions or permanently delete previous versions
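
A rough sketch of a lifecycle rule via the CLI (bucket name and day counts are just placeholders). With lifecycle.json containing:

 {
   "Rules": [
     {
       "ID": "archive-rule",
       "Status": "Enabled",
       "Filter": { "Prefix": "" },
       "Transitions": [
         { "Days": 30, "StorageClass": "STANDARD_IA" },
         { "Days": 60, "StorageClass": "GLACIER" }
       ],
       "Expiration": { "Days": 365 }
     }
   ]
 }

apply it with:

 aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-12345 \
     --lifecycle-configuration file://lifecycle.json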

Content Delivery Network

CloudFront is a global content delivery network service that securely delivers data to users with low latency and high transfer speeds. CloudFront also works with non-AWS origin servers. 
  • Edge location - Content is cached here. This is separate from a region or AZ.
  • Origin - S3 bucket, EC2 instance, ELB or Route 53.
  • Distribution - Name given to the CDN, which consists of a collection of edge locations.
    • Web Distribution - Typically used for websites
    • RTMP - Used for media streaming
  • Edge locations are not just for reads, you can even write to an edge location.
  • Objects are cached for the life of the TTL (time to live). Expiring objects before the TTL is possible but costs extra.
  • You can have multiple origins (like S3 buckets etc.) in a CloudFront distribution.
  • You can have multiple behaviors, like routing a path pattern to a particular origin etc.
  • Configure error pages.
  • Geo restriction settings, whitelist or blacklist countries.
  • Invalidation removes an object from the edge locations. A less expensive alternative is to use versioned object or directory names.
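
For example (the distribution ID is hypothetical), an invalidation can be issued from the CLI:

 aws cloudfront create-invalidation --distribution-id E1ABCDEXAMPLE --paths "/images/*"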

Storage Gateway

  • File Gateway (NFS)
  • Volume Gateway (iSCSI) - Data written to disk is asynchronously backed up as point-in-time snapshots and stored in the cloud as EBS snapshots. Snapshots are incremental and are compressed to minimize storage charges. Volume size 1 GB - 16 TB.
    • Stored Volume
    • Cache Volume
  • Tape Gateway (VTL)

Transfer Acceleration

This utilizes the CloudFront edge network to accelerate your uploads to S3. When you enable transfer acceleration for a bucket, you get a distinct URL (<bucketname>.s3-accelerate.amazonaws.com) to upload directly to an edge location, which will then transfer that file to the S3 bucket.
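
A small sketch (bucket name hypothetical) of enabling acceleration and making the CLI use the accelerate endpoint:

 aws s3api put-bucket-accelerate-configuration --bucket my-example-bucket-12345 \
     --accelerate-configuration Status=Enabled
 # tell the CLI to use the <bucketname>.s3-accelerate.amazonaws.com endpoint
 aws configure set default.s3.use_accelerate_endpoint true
 aws s3 cp ./bigfile.zip s3://my-example-bucket-12345/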

Static Website Hosting
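
As noted above, an S3 bucket can serve a static web site. A minimal sketch with the CLI (bucket name and files are hypothetical; the objects must also be publicly readable):

 aws s3 website s3://my-example-bucket-12345/ --index-document index.html --error-document error.html
 aws s3 cp ./index.html s3://my-example-bucket-12345/ --acl public-read
 # site is then available at http://my-example-bucket-12345.s3-website-<region>.amazonaws.com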

Dec 16, 2017

VPC

Amazon Virtual Private Cloud lets you provision a logically isolated section of AWS where you can launch AWS resources in a virtual network that you define. You have complete control over your VPC, including selection of the IP range (IPv4 CIDR block), creation of subnets, and configuration of route tables and network gateways. It's logically isolated from other virtual networks in the AWS cloud. 

When you create a VPC it automatically creates the following:
  • Route table
    • It will create a Main route table in the VPC. You will not be able to delete the Main route table; it gets deleted automatically when you delete the VPC.
    • The Main route table will have a local target route with the destination of the VPC IPv4 CIDR, and also IPv6 if you selected an IPv6 CIDR block when you created the VPC.
    • Any subnet which you create and do not explicitly associate with a route table will automatically be associated with the Main route table.
  • Network ACLs
    • A default Network ACL will be created which you cannot delete.
    • The default Network ACL will allow all inbound and outbound traffic. You have the option of changing it to deny, or modifying the rules in it.
  • Security group
    • A default VPC security group will be created.
    • By default it will allow all outbound traffic, allow no inbound traffic, and allow instances associated with this SG to talk to each other.
    • You can edit security group rules by adding, removing or updating them.

Using VPC peering you can connect one VPC to another via a direct network route using private IP addresses. This can be done with VPCs in another AWS account as well as with other VPCs in the same account.
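
As a rough sketch (the CIDR and IDs are arbitrary), creating a VPC from the CLI; the Main route table, default NACL and default security group described above are created automatically:

 aws ec2 create-vpc --cidr-block 10.0.0.0/16
 # inspect the main route table that was created with it
 aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0abc1234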

Subnets

A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting.
  • When you create a VPC you specify an IPv4 CIDR block (and an optional Amazon-provided IPv6 CIDR block). You can create a subnet in the VPC with a subset of the VPC IPv4 CIDR block (and also for IPv6 if you choose to do so).
  • Based on the subnet's IPv4 CIDR block, you will get IPv4 addresses in that subnet; the count of available IPs depends on the CIDR block size. One important thing to note here is that the first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use, and cannot be assigned to an instance.
  • By default any resource created in this subnet will not get a public IP address. If you want to change this behavior, you will have to enable the auto-assign public IPv4 address setting.
  • The subnet will be associated with the Main route table and the default Network ACL. This can of course be modified.
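
A minimal sketch (IDs and CIDRs hypothetical) of creating a subnet and enabling auto-assign public IPs:

 aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
 # make instances launched in this subnet get a public IPv4 address by default
 aws ec2 modify-subnet-attribute --subnet-id subnet-0abc1234 --map-public-ip-on-launch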

Route Table

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in your VPC is associated with ONLY ONE route table. If you don't explicitly associate your subnet with a route table then it's associated with the Main route table.
  • Each route in a table specifies a destination CIDR and a target. For example destination 10.0.0.0/16 with target local, which means traffic destined for any IP within 10.0.0.0/16 stays local. Similarly, to open internet access you can choose destination 0.0.0.0/0 (which essentially means any IP) with target internet gateway.
  • When you add an Internet gateway, an egress-only Internet gateway, a virtual private gateway, a NAT device, a peering connection, or a VPC endpoint in your VPC, you must update the route table for any subnet that uses these gateways or connections.
  • For a public subnet (an instance to be served as a web server) you need to have a route with destination 0.0.0.0/0 and target as the internet gateway.
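
A hedged sketch (all IDs hypothetical) of making a subnet public: create and attach an internet gateway, add the 0.0.0.0/0 route, and associate the subnet with that route table:

 aws ec2 create-internet-gateway
 aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234
 aws ec2 create-route-table --vpc-id vpc-0abc1234
 aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234
 aws ec2 associate-route-table --route-table-id rtb-0abc1234 --subnet-id subnet-0abc1234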

Internet Gateway

An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IP (IPv4 and IPv6 traffic) addresses. One VPC can only have one Internet Gateway. 

NAT Instance

You can use a network address translation (NAT) instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet or other AWS services, but prevent the instances from receiving inbound traffic initiated by someone on the Internet. 

Each EC2 instance performs source/destination checks by default, which means the instance must be the source or destination of any traffic it sends or receives. However, a NAT instance must be able to send and receive traffic when the source or destination is not itself. Therefore, source/destination checks must be disabled on a NAT instance.
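
For example (instance ID hypothetical), disabling the source/destination check from the CLI:

 aws ec2 modify-instance-attribute --instance-id i-0abc1234 --no-source-dest-check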

NAT Gateway

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. For IPv6 use an egress-only Internet gateway. 


A NAT instance is an instance (single or multiple) which you have to manage, whereas a NAT gateway is a set of clustered instances which Amazon manages, so you don't have to worry about maintaining it. A NAT instance sits behind a security group, whereas a NAT gateway is not associated with a security group. Both need to be in a public subnet which allows internet traffic, and need to be added to the route table which is associated with the private subnet. This way resources within the private subnet can connect to the internet. The downside of a NAT instance is that all traffic from your private subnet goes through it, so it's a bottleneck, and if it goes down it will impact all the resources within your private subnet. A NAT instance can be used as a bastion server (meaning it can be used to RDP or SSH into servers in the private subnet), whereas a NAT gateway cannot. A NAT gateway is automatically assigned an IP address when you create it, and Amazon manages it. You should have a NAT gateway in multiple AZs for redundancy. You cannot SSH or RDP into a NAT gateway.
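
A rough sketch (IDs hypothetical) of creating a NAT gateway in a public subnet and routing the private subnet's internet traffic through it:

 aws ec2 allocate-address --domain vpc
 aws ec2 create-nat-gateway --subnet-id subnet-public123 --allocation-id eipalloc-0abc1234
 # route the private subnet's internet-bound traffic through the NAT gateway
 aws ec2 create-route --route-table-id rtb-private123 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234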

Network ACL

A network access control list (ACL) is a layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets inside the VPC. 

  • By default everything is denied when you create a custom NACL.
  • Each subnet must be associated with a NACL; if you don't explicitly associate a subnet with a NACL it is automatically associated with the VPC's default NACL.
  • You can associate a NACL with multiple subnets, but a subnet can be associated with only a single NACL, and when you associate a new NACL with a subnet, the previous association is removed.
  • A NACL can span subnets in multiple AZs, whereas a subnet is in a single AZ.
  • A NACL contains a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.
  • Network ACLs are stateless: responses to allowed inbound traffic are subject to the rules for outbound traffic and vice versa, meaning you need to specify both inbound and outbound rules explicitly. Security Groups, which act as a firewall controlling traffic in and out of EC2 instances, are stateful.
  • In a Security Group you can only allow, but in a NACL you can allow or deny.

Here are some examples of minimum Network ACL rules required to allow specific operations from a subnet.

To Allow ping

  • Inbound - All ICMP - IPv4 Allow, All Traffic Deny
  • Outbound - All ICMP - IPv4 Allow, All Traffic Deny

To Allow SSH

  • Inbound - SSH (22) Allow, All Traffic Deny
  • Outbound - Custom TCP Rule (1024-65535) (ephemeral ports) Allow, All Traffic Deny

To Allow SSH from Public subnet to private subnet

Since you cannot directly connect to an instance in a private subnet, you can create bastion instances, which act as jump boxes you can use to administer (like SSH or RDP) instances in the private subnet.
  • Public Subnet NACL
    • Inbound - SSH (22) Allow, Custom TCP Rule (1024-65535) (ephemeral ports) Allow, All Traffic Deny
    • Outbound - Custom TCP Rule (1024-65535) (ephemeral ports) Allow, SSH (22) Allow, All Traffic Deny
  • Private Subnet NACL
    • Inbound - SSH (22) Allow, All Traffic Deny
    • Outbound - Custom TCP Rule (1024-65535) (ephemeral ports) Allow, All Traffic Deny

Allow HTTP Access from subnet

  • Inbound - Custom TCP Rule (1024-65535) (ephemeral ports) Allow, All Traffic Deny
  • Outbound - HTTP (80) Allow (or HTTPS (443), for example for running aws s3 ls), All Traffic Deny

Allow HTTP Access to Subnet (instance acting as web server)

  • Inbound - HTTP (80) Allow, All Traffic Deny
  • Outbound - Custom TCP Rule (1024-65535) (ephemeral ports) Allow, All Traffic Deny
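
As an illustration (NACL ID hypothetical), a rule like the SSH-allow above can be created from the CLI; the matching outbound ephemeral-port rule is added the same way with --egress:

 aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --ingress \
     --rule-number 100 --protocol tcp --port-range From=22,To=22 \
     --cidr-block 0.0.0.0/0 --rule-action allow
 aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --egress \
     --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
     --cidr-block 0.0.0.0/0 --rule-action allow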

VPC Flow Log

It's a feature which enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. It can be created at 3 levels:

  • VPC
  • Subnet
  • Network interface level

To set up a flow log:
  • you have to define a filter (all, accepted, rejected)
  • an IAM role which can perform logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogGroups, logs:DescribeLogStreams and logs:PutLogEvents
  • assign a log group

  • You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account
  • You cannot tag a flow log
  • After you have created a flow log, you cannot change its configuration, for example you cannot associate a different IAM role with the flow log
The following traffic is not monitored:
  • Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.
  • Traffic generated by a Windows instance for Amazon Windows license activation.
  • Traffic to and from 169.254.169.254 for instance metadata.
  • DHCP traffic.
  • Traffic to the reserved IP address of the default VPC router.
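
A minimal sketch (VPC ID, log group name and role ARN are hypothetical) of creating a flow log at the VPC level:

 aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0abc1234 \
     --traffic-type ALL --log-group-name my-vpc-flow-logs \
     --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role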

Dec 12, 2017

EC2

EC2 is a web service which provides resizable compute capacity in the cloud in minutes, allowing you to quickly scale capacity, both up and down, as your compute requirements change.

EC2 Options

  • On Demand - Allows you to pay by the hour (or by the second). No upfront payment or commitment. For applications with short-term spikes or unpredictable workloads that cannot be interrupted, or apps being developed for the first time.
  • Reserved - You can reserve for a 1-3 yr term. Price is less than On Demand. For steady state or predictable usage. It's tied to a region, which cannot be changed, but you can change the AZ.
    • Standard RI - Price up to 75% off On Demand
    • Convertible RI - Price up to 54% off On Demand. You have the flexibility of changing some of the attributes of the EC2 instance, like general purpose to CPU optimized, or Windows to Linux.
    • Scheduled RI
  • Spot - If you have flexible start and end times. If your bid price is higher than the spot price the EC2 instance will be provisioned. If the spot price goes higher than your bid, then the instance will be terminated. Useful for data processing which can happen at 3am in the morning. If you terminate the instance you pay for the full hour; if AWS terminates it because the spot price went above your bid price, the hour in which it was terminated is free.
  • Dedicated Host - If you don't want a multi-tenant scenario, like for regulatory requirements, or for licensing which does not support multi tenancy or cloud deployment. Can be purchased On Demand or Reserved.

EC2 Instance Types

  • D2 Dense storage used for file servers, data warehousing, Hadoop
  • R4 Memory optimized for memory intensive apps
  • M4 General purpose app servers
  • C4 Compute optimized, CPU intensive apps/DBs
  • G2 Graphics intensive, video encoding, 3D app streaming
  • I2 High speed storage, NoSQL DBs, data warehousing
  • F1 Field programmable gate array, hardware acceleration for your code, change underlying hardware to suit your needs
  • T2 Lowest cost general purpose, web servers / small DBs
  • P2 Graphics general purpose GPU, machine learning 
  • X1 Memory optimized for SAP HANA/Apache Spark, extreme memory

Launching EC2

  • While launching an EC2 instance you will be asked to use a public (AWS stores it) and private key (you store it) pair. You need the private key to obtain the password for Windows RDP, and for Linux you can use it to SSH into your instance. You can use the same public/private key combination for multiple EC2 instances.
  • For each EC2 instance you get an IPv4 (or IPv6) public IP address and DNS name (and also a private one for internal use) which you can use to RDP or SSH.
  • Termination Protection will not allow you to terminate the instance until you change the instance setting.
  • System status check - It just makes sure the instance is reachable. If this fails there may be an issue with the infrastructure hosting your instance. You can restart or replace the instance.
  • Instance status check - This verifies that the instance OS is accepting traffic. If this fails you can restart or change the OS configuration.
  • A security group is a virtual firewall where you specify what incoming/outgoing traffic is allowed. By default everything is blocked; you need to whitelist what you want to allow. 
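
A hedged sketch (AMI, key, SG and subnet IDs are hypothetical) of launching an instance from the CLI:

 aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro \
     --key-name my-key-pair --security-group-ids sg-0abc1234 \
     --subnet-id subnet-0abc1234 --count 1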

Elastic Block Store

This allows you to create storage volumes and attach them to EC2 instances. You can consider this as a disk which is attached to your VM. This is block-based storage where you can deploy an OS, file system or DB, whereas S3 is object storage which is not suitable for installing an OS, DB etc. A volume is placed in a specific AZ and is automatically replicated within that AZ, which protects it from failure of a single component. It cannot be mounted to multiple EC2 instances. All EBS volumes mounted on an EC2 instance will be in the same AZ.  
  • General Purpose SSD (GP2) - 3 IOPS per GB, with up to 10,000 IOPS
  • Provisioned IOPS SSD (IO1) - Designed for I/O intensive apps like large relational or NoSQL DBs; use if you need more than 10,000 IOPS, it can go up to 20,000 IOPS
  • Magnetic storage (physical spinning disk)
    • Throughput Optimized HDD (ST1) - Big data, data warehousing, log processing, frequently accessed sequential data, can't be a boot volume
    • Cold HDD (SC1) - Lowest cost storage for infrequently accessed workloads, file servers, can't be a boot volume
    • Magnetic Standard - Lowest cost per GB and is bootable. Suitable where data is accessed infrequently
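
For example (AZ, size and IDs arbitrary), creating and attaching a Provisioned IOPS volume from the CLI:

 aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type io1 --iops 5000
 # the volume must be in the same AZ as the instance it is attached to
 aws ec2 attach-volume --volume-id vol-0abc1234 --instance-id i-0abc1234 --device /dev/sdf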

RAID

Redundant Array of Independent Disks. You put multiple disks together and they act as a single disk to the OS. This is needed when you need more I/O than a single volume type provides. For example, you have a DB which is not supported by AWS and you are not getting enough I/O with the default EBS type. In Windows you can do this by RDPing into the instance and going to Disk Management. Taking a snapshot while the instance is running can exclude data held in cache by the application and OS. This tends not to matter for a single volume, however for multiple volumes in a RAID it can be a problem. This can be solved by freezing the file system, unmounting the RAID array, or shutting down the EC2 instance, which is the easiest way. 
  • RAID 0 - Striped, no redundancy, good performance. If one disk fails you lose everything.
  • RAID 1 - Mirrored, redundancy.
  • RAID 5 - Good for read, bad for write; AWS does not recommend this.
  • RAID 10 - Striped and mirrored; it's a combination of RAID 1 and RAID 0.

Volume

  • You can modify a volume's type (e.g. standard to Provisioned IOPS, but not from Magnetic Standard) and size.
  • You can create a snapshot. While doing this you cannot change the encryption type.
  • You can detach a volume from an EC2 instance, after which you can delete it or attach it to another EC2 instance.
  • When terminating an instance the root volume will be deleted by default, but other EBS volumes attached to the instance will not be deleted. You can change this behavior by unchecking "delete on termination" while provisioning the EC2 instance.
  • The root volume of a public AMI cannot be encrypted, because the encryption key would be held within your AWS account.
  • Additional volumes on an EC2 instance can be encrypted while creating the EC2 instance from a public AMI.
  • You can also use a third party tool such as BitLocker for Windows to encrypt the root volume. 

Snapshot

  • From a snapshot you can create a volume and change the volume type, size and availability zone. You cannot encrypt the EBS volume.
  • You can create an AMI; while doing that you can add extra volumes, but you cannot encrypt the EBS volumes.
  • By default snapshots are private, but you can change the permission to make them public or share them with another AWS account, which gives that account permission to copy the snapshot and create a volume from it.
  • You can copy a snapshot to another region or to the same region, and you also have the option to encrypt the snapshot copy.
  • Snapshots of encrypted volumes are automatically encrypted. Volumes (even root) restored from encrypted snapshots are encrypted. You can share a snapshot but only if it is not encrypted, because the encryption key is associated with your account.
  • Snapshots exist on S3, but you will not be able to see them in a bucket. A snapshot is a point-in-time copy of the volume, and snapshots are incremental.
  • The first snapshot may take longer. It is advisable to stop the instance before taking a snapshot, however you can take a snapshot even while the instance is running.
  • A snapshot has a createVolumePermission attribute that you can set to one or more AWS account IDs to share it.
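
A small sketch (IDs hypothetical) covering the snapshot operations mentioned above: create, copy with encryption, and share via createVolumePermission:

 aws ec2 create-snapshot --volume-id vol-0abc1234 --description "nightly backup"
 # copy to another region, encrypting the copy
 aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
     --source-snapshot-id snap-0abc1234 --encrypted
 # share an unencrypted snapshot with another account
 aws ec2 modify-snapshot-attribute --snapshot-id snap-0abc1234 \
     --attribute createVolumePermission --operation-type add --user-ids 123456789012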

AMI

  • An AMI can be created from a snapshot or an EC2 instance.
  • You can copy an AMI to another region or to the same region, and you also have the option to encrypt the target EBS snapshot.
  • You can launch an EC2 instance from an AMI.
  • You can create a spot request from an AMI.
  • You can delete an AMI by deregistering it.

EBS Vs Instance Store

Some Amazon EC2 instance types come with a form of directly attached, block-device storage known as the instance store. Instance store volumes are sometimes called ephemeral storage. Instance store backed instances cannot be stopped; if the underlying host fails, you will lose the data. EBS backed instances can be stopped, and you will not lose the data on the instance if it is stopped. You can reboot both and you will not lose data. By default both root volumes will be deleted on termination, however with EBS volumes you can tell AWS to keep the root device volume. Instance store volumes are less durable and are created from a template stored in S3, whereas an EBS volume is created from a snapshot. Instance store volumes cannot be added after the EC2 instance is created.

Load Balancer

A virtual appliance which spreads traffic across your different web servers.
  1. Classic Load Balancer - The AWS Classic Load Balancer (CLB) operates at Layer 4 of the OSI model. What this means is that the load balancer routes traffic between clients and backend servers based on IP address and TCP port. For example, an ELB at a given IP address receives a request from a client on TCP port 80 (HTTP). It will then route that request, based on the rules previously configured when setting up the load balancer, to a specified port on one of a pool of backend servers. In a classic LB you register instances with the LB.
  2. Application Load Balancer - It operates at Layer 7, which means you not only route traffic based on IP address and TCP port, but you can add more configuration based on path etc. In an application LB you register instances as targets in a target group.
  3. Network Load Balancer
To create a load balancer you configure the following:
  • Load balancer protocol (port), instance protocol (port)
  • Security group
  • Health check on the EC2 instances (response timeout, interval, unhealthy threshold, healthy threshold)
  • An Elastic Load Balancer will have a public IP address, but Amazon manages it and you will never get the IP as it changes internally. Here you get a public DNS name.
  • Instances monitored by the ELB are either InService or OutOfService.
  • You can have only one subnet from each AZ, you should have at least two AZs in your LB, and all of your subnets should have an internet gateway if you are creating an internet-facing LB.
ELB Connection Draining causes the load balancer to stop sending new requests to the backend instances when the instances are being deregistered or become unhealthy, while ensuring that in-flight requests continue to be served. You can specify a maximum of 1 hr (default 300 sec) for the load balancer to keep connections alive before reporting the instance as deregistered.

The ELB Session Stickiness/Affinity feature enables the LB to bind a user's session to a specific instance. It uses your application's session cookie, or you can configure the ELB to create its own session cookie. 

Health Check

  • CPU Credit Usage, CPU SurplusCreditBalance, CPU SurplusCreditsCharged, CPUCreditBalance, CPUUtilization
  • DiskReadBytes, DiskReadOps, DiskWriteBytes, DiskWriteOps
  • NetworkIn, NetworkOut, NetworkPacketsIn, NetworkPacketsOut
  • StatusCheckFailed, StatusCheckFailed_Instance, StatusCheckFailed_System
  • For custom metrics like RAM utilization etc. you need to write code to push them yourself; see the sketch below.
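
A minimal sketch (namespace, metric name and value are made up) of pushing a custom metric such as memory utilization from the CLI:

 aws cloudwatch put-metric-data --namespace "Custom/EC2" \
     --metric-name MemoryUtilization --unit Percent --value 62.5 \
     --dimensions InstanceId=i-0abc1234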

Cloud Watch

Here you can create dashboards, alarms, events (based on any event it can trigger some other activity) and logs (here you can go to the app layer and log any event). Standard monitoring is 5 min and detailed monitoring (you pay extra) is 1 min. CloudWatch is for monitoring and CloudTrail is for auditing.

CloudWatch can monitor resources such as EC2 instances, DynamoDB tables, RDS DB instances, custom metrics generated by your applications and services, and any log files your apps generate. You can use CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your app running smoothly. 

Bootstrap Script

While creating an EC2 instance you can specify a bootstrap (user data) script. Refer to the following for an example on a Linux machine.

 #!/bin/bash
 # user data scripts already run as root, so elevating with sudo/su is not needed
 yum update -y
 yum install httpd -y                                          # install Apache
 aws s3 cp s3://rraj-test-bucket /var/www/html/ --recursive    # needs an IAM role with S3 read access
 currentDate=`date`
 echo $HOSTNAME ": was created on - "  $currentDate > /var/www/html/index.html
 curl http://www.google.com                                    # simple outbound connectivity check
 service httpd start
 chkconfig httpd on                                            # start Apache on every boot

Placement Group

It is a logical grouping of instances within a single availability zone. Using placement groups enables applications to participate in a low latency, 10 Gbps network. It's recommended for apps which benefit from low network latency, high network throughput, or both. It cannot span multiple availability zones. The name of a placement group must be unique within your AWS account. Only certain types of instances can be launched in a placement group (compute optimized, GPU, memory optimized, storage optimized). AWS recommends homogeneous instances (instances with the same size and same family) within a placement group. You can't merge placement groups. You can't move an existing instance into a placement group.
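
As a sketch (names and AMI hypothetical), creating a placement group and launching instances into it:

 aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster
 aws ec2 run-instances --image-id ami-0abc1234 --instance-type c4.large --count 2 \
     --placement GroupName=my-cluster-pg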

EFS

  • Supports the Network File System version 4 (NFSv4) protocol
  • Only pay for the storage you use
  • It can support thousands of concurrent NFS connections
  • Data is stored across multiple AZs
  • EFS is file-based storage
  • Read-after-write consistency
  • Can scale up to petabytes
  • It can be connected to multiple EC2 instances
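
A rough sketch of mounting an EFS file system on an EC2 instance (the file system ID and region are hypothetical; the instance's security group must allow NFS, port 2049):

 sudo mkdir -p /mnt/efs
 sudo mount -t nfs4 -o nfsvers=4.1 fs-0abc1234.efs.us-east-1.amazonaws.com:/ /mnt/efs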

IAM Role

In order to access AWS services, you need to configure credentials by running aws configure and entering the AWS Access Key ID and Secret Access Key. Doing this stores the info in the .aws folder, and anyone who is able to SSH into the instance will be able to access the key and secret. To avoid this you can specify an IAM role while creating the EC2 instance. You need to make sure you add the necessary policies to this role.

AWS Command Line

 aws s3 ls
 aws ec2 describe-instances
 aws ec2 help
 On PuTTY hit q to escape if the output is paged and you don't want to scroll further.
 Create a user and give it S3 admin access. When you run aws configure, use this user's secret key and access key; they will be stored in the .aws folder, so if your EC2 instance is compromised, someone can gain access to the keys. This can be prevented by creating a role for the EC2 service (as the EC2 service will use this role) and assigning this role the policy AmazonS3FullAccess. Now when you create a new EC2 instance assign this role as the IAM role, or for an existing instance click on attach/replace IAM role.
Instance Metadata - You can access this from the command line with the following curl commands:
curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/public-ipv4 > mypublicip.html

Launch Configuration and Auto Scaling

  • You can increase/decrease the group size based on alarms which you set.
  • Alarms can be set based on the average/min/max/sum/sample count of CPU utilization, disk read/write, network in/out. A minimal CLI sketch follows.
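
A hedged sketch (names, AMI and subnets hypothetical) of a launch configuration, an auto scaling group, and a simple scaling policy that a CloudWatch alarm could trigger:

 aws autoscaling create-launch-configuration --launch-configuration-name my-lc \
     --image-id ami-0abc1234 --instance-type t2.micro --key-name my-key-pair
 aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg \
     --launch-configuration-name my-lc --min-size 1 --max-size 4 --desired-capacity 2 \
     --vpc-zone-identifier "subnet-0abc1234,subnet-0def5678"
 # add one instance when the associated CloudWatch alarm fires
 aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg \
     --policy-name scale-out --scaling-adjustment 1 --adjustment-type ChangeInCapacity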

Oct 9, 2017

Measuring Performance of Algorithm

Asymptotic performance of an algorithm refers to defining an expression or curve that describes the execution time of an algorithm. It only considers the significant part of the expression, for example f(n), f(n²). If I have an algorithm which has 4 + n(n + 5) instructions then we consider f(n²) as the asymptotic performance of the algorithm; we simply take the highest order term in the expression.

Refer following for example of f(n)

  var printListItem = function(list){
    for (var i = 0; i < list.length; i++) {
      console.log(list[i]);
    }
  }


Refer following for example of f(n²)

var combination = function(list1, list2){
    list1.forEach((item1)=>{
      list2.forEach((item2)=>{
        console.log(item1  + ' ' + item2)
      });
    });
  }

Refer following for example of f(log(n))

  var binarySearchRecursiveStyle = function(sortedList, item, start, end){
    if (start > end) {
      return false;
    }
    var mid = Math.floor((start + end) / 2);

    if (sortedList[mid] === item) {
      return true;
    }
    else if (sortedList[mid] > item) {
      // search the lower half
      return binarySearchRecursiveStyle(sortedList, item, start, mid - 1);
    }
    else {
      // search the upper half
      return binarySearchRecursiveStyle(sortedList, item, mid + 1, end);
    }
  }

Refer following for curves representing f(n²), f(n), f(log(n))






Big Theta
This represents the actual (tight bound) performance of an algorithm; refer above for f(n), f(n²) etc.

Big O
This represents performance in the worst case. Take the example of the following code. It may be possible that we have to loop to the end of the list and not find the item, in which case we can represent the performance as f(n). This is what is referred to as Big O.
    

  var contains = function(list, item){
    for (var i = 0; i < list.length; i++) {
      if (list[i]==item) {
        return true;
      }
     }
    return false;
  }


Big Omega
This represents performance in the best case. In the same example above the best case could be the situation in which we find the item in the first position, in which case the performance will be f(1). This is referred to as Big Omega.

Amortized complexity
There are some algorithms where you have to do some housekeeping at certain intervals. For example List in C#: here you don't define the size, it grows dynamically. Once it reaches the internal array limit it creates a new array with double the size of the original array and copies each item from the original array to the new array.

List -
Add - O(1), This has amortized complexity of resizing (and copying.)
Remove - O(n) This may need to shift all remaining elements, e.g. if the first element is being removed.
Go To Index - O(1)
Find - O(n)

Linked List
Add/Remove/Merge O(1)
Find - O(n)
Go To Index - O(n)

Dictionary
When we store an object in a dictionary, it’ll call the GetHashCode method on the key of the object to calculate the hash. The hash is then adjusted to the size of the array to calculate the index into the array to store the object. Later, when we lookup an object by its key, GetHashCode method is used again to calculate the hash and the index. Key should be unique
Add/Remove/Find O(1)

HashSet
It represent set of values. Here every value should be unique which is determined by the value returned from GetHashCode .
Add/Remove/Contains O(1)

Stack
Last In First Out
Pop O(1)
Contains O(n)
Push O(1) This has amortized complexity of resizing (and copying.)

Queue
First In First Out
Dequeue O(1)
Contains O(n)
Enqueue O(1) This has amortized complexity of resizing (and copying.)

Aug 7, 2017

Angular 2 Adding http service

Angular provides an HTTP package to perform HTTP requests. HttpClient is available as an injectable class with all the methods to perform HTTP requests. It comes from HttpClientModule (@angular/common/http).

Once you have the module imported you should be able to use the HttpClient class for performing HTTP requests. In theory you can inject this (http: HttpClient) directly in your component class, but to have better separation of concerns you should create an injectable service and let that handle the HTTP call.

As you can see from the HTTP API documentation, it returns an Observable. You should be able to use all Observable instance methods on this. Refer to the RxJS documentation for the list of methods which can be used on an Observable. Some of the common ones are filter, map, catch, forEach, groupBy, merge, retry, toPromise.

Since this is based on observable, the actual http call will not happen until it has any subscriber.

Refer following for a simple service which call an http endpoint.
  
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { HttpClient } from '@angular/common/http';

@Injectable()
export class MyService {
  constructor(private http: HttpClient) { }
  public getData(): Observable<any> {
    // HttpClient parses the JSON response body automatically, no .json() call needed
    return this.http.get<any>('someurl');
  }
}

Apr 18, 2017

js object property descriptor

JavaScript properties have descriptors called property descriptors. You can access a property descriptor like this
  
 var person ={
  name: {lastName:'xyz', firstName:'abc'},
  age: 15
 }
 console.log(Object.getOwnPropertyDescriptor(person,'name'));

This should return
  
 {
  value: object, //this could be value for primitives type property
  writable: true, 
  enumerable: true, 
  configurable: true
 }

You can update these descriptors by using Object.defineProperty
   
 Object.defineProperty(person,'age',{value:5});

Updating writable property descriptor to false will make this property read only
   
 Object.defineProperty(person,'age',{writable:false});

After doing this you won't be able to update property value
  
 person.age = 11; // This fails silently, or throws "Cannot assign to read only property 'age'" in strict mode

In the following case we are making the name property read only, but you can still update any of the properties of the name object, as you are not changing the name property reference
  
 Object.defineProperty(person,'name',{writable:false});
 person.name.lastName = 'xyza'

If you want to prevent this to happen you can use
  
 Object.freeze(person.name);

After this you won't be able to update lastName
  
 person.name.lastName = 'xyzab' // This fails silently, or throws in strict mode, because person.name is frozen

Updating configurable to false will prevent the property from being redefined. You will not be able to change the enumerable/configurable property descriptors. You will also not be able to delete the property.
  
 Object.defineProperty(person,'age',{configurable:false}); 

Following will throw error
  
 Object.defineProperty(person,'age',{enumerable:false}); //cannot redefine property age
 Object.defineProperty(person,'age',{configurable:true}); //cannot redefine property age
 delete person.age //cannot delete property age

You will still be able to change writable descriptor
  
 Object.defineProperty(person,'age',{writable:false}); 

Updating enumerable to false will prevent this property from being enumerated
    
 Object.defineProperty(person,'age',{enumerable:false}); 

After executing the above line you won't be able to see the age property in any of the below code
   
 for (var propertyName in person){
    console.log(propertyName + ": " + person[propertyName])
 }
 console.log(Object.keys(person))

  console.log(JSON.stringify(person))  

Apr 15, 2017

javascript prototype

A function's prototype is the object instance that will become the prototype for all objects created using this function as a constructor. For example, when you define the following function a prototype property is created on Foo, which you should be able to access as Foo.prototype, and it will be initialized as an empty object (like {}) with no properties.
 
 function Foo(){
   console.log("js will create prototype property on Foo which will be initialized as empty object {} ");
 }

You can add a property to the prototype object of Foo: Foo.prototype.myFirstName = "Sam". Now when you create an object using the constructor function, the constructor function's prototype will become the object's prototype
 
 var foo = new Foo();
 var bar = new Foo();
 Foo.prototype.myLastName = "Adam"

So following should return true
 
 console.log(bar.__proto__=== Foo.prototype);
 console.log(foo.__proto__=== Foo.prototype);

You can add more properties to Foo prototype
 
 Foo.prototype.age = 5

Now let's say you update the function's prototype like this. Here you are changing the pointer of Foo.prototype, whereas in the above case the pointer was the same, you just added a new property to that object.
 
 Foo.prototype = {myName: "Tom"}

Doing so will change the return value of the following to false, as bar and foo's prototype is still pointing to the prototype object which was there on Foo when foo and bar were created.
 
 console.log(bar.__proto__=== Foo.prototype);
 console.log(foo.__proto__=== Foo.prototype);

So now when you create new Foo object this will get new prototype
 
 var baz = new Foo();
 console.log(baz.__proto__=== Foo.prototype);

There is a chain of prototype. Following will return true
 
 console.log(bar.__proto__.__proto__=== Object.prototype);
 console.log(bar.__proto__.__proto__.__proto__=== null);

When you try to access foo.myFirstName, JavaScript first looks to see whether myFirstName is a property of foo itself; if not, then it looks up the prototype chain and returns it. So in the above case foo.hasOwnProperty('age') will return false, whereas foo.__proto__.hasOwnProperty('age') will return true. You can also check by looking at all the keys with Object.keys(foo).

When you create an object with the new keyword the following happens
  
 function Foo(name){
  var someInteger = 1;
  this.name = name;
  return 5;
 }
 var foo = new Foo('John');

A constructor called with the new keyword returns the value of its "this" parameter by default; a returned primitive (like the 5 above) is ignored, though an explicitly returned object would be used instead. For functions called without new and without a return statement, the default return value is undefined. So in the above case foo will be assigned like this
 
  Foo {
   name: 'john'
  }

The foo object will not see someInteger or the returned value 5. The function's prototype object (Foo.prototype) is assigned to the __proto__ property of the new object, so this will return true
 
  console.log(foo.__proto__=== Foo.prototype); //will return true

When you create an object by Object.create() only prototype is set
 
 var foo = Object.create(Foo.prototype);
 console.log(foo.__proto__=== Foo.prototype); //will return true
 console.log(typeof foo.name1 === 'undefined'); //will return true
 console.log(foo.name1 === undefined); //will return true

When you create an object with an object literal {}, then Object.prototype is set as the prototype of the object
  
 var foo = {firstName: 'John', lastName: 'singh'};
 console.log(foo.__proto__ ===Object.prototype); //will return true

Apr 11, 2017

Angular - Observables

Observables are a proposed ECMAScript feature, which means you need to make use of an external library to use them today; RxJS is a good one. This is not an Angular-specific feature, though the Angular CLI does add RxJS to the dependencies. Observables give you all the features of promises and more. A Promise handles a single event when an async operation completes or fails, whereas an Observable is like a stream and allows you to pass zero or more events, where the callback is called for each event. For example FormControl's valueChanges returns an Observable, so I can write code like this which will write to the console every time the value changes.
   
   let sub = this.name.valueChanges.pipe(debounceTime(1000)).subscribe(
      (newValue: string) => {
       console.log(newValue);
       if (newValue.length === 5) {
        sub.unsubscribe();
       }
      });

In the following example we are making an HTTP call and I am expecting a number; then I apply pipe, which gives me the option to transform the data before passing it to the subscriber. This is helpful if I would like to transform data before sending it to the subscriber, or maybe convert errors to a standard format. Keep in mind observables use deferred execution, so unless it has a subscriber it (along with the operators in the pipe) will not be executed.
   import { Observable, BehaviorSubject, throwError, combineLatest, forkJoin } from 'rxjs';
   import { retry, catchError, tap, map, filter, finalize,  delay,debounceTime } from 'rxjs/operators';
   import { HttpClient } from '@angular/common/http'; 

   this.http.get(url).pipe(
        retry(3),
        map(obj=>{return obj *2;}),
        tap(obj => {console.log(obj)}),
        catchError((err)=>{return throwError(err);})
      ).subscribe(
        (value)=>{console.log(value)},
        (err)=>{console.log(err)},
        ()=>{console.log('done')}
      );

combineLatest
Once all input observables have produced at least one value it emits an array of the latest values, and after that it emits again every time any input observable produces a new value.
forkJoin
It requires all the observables to complete and then emits a single value that is an array of the last values produced by the input observables.

Observables also have the advantage over Promises of being cancelable. One example is type-ahead: if the user has changed the text, which results in a new HTTP call, we can cancel the subscription to the previous one. In the case of a promise, the callback will be called in either the success or failure scenario. Promises don't have the option of lazy execution, whereas an observable will not be executed until someone subscribes to it. You also have the option of retry and retryWhen on observables.
If you subscribe to an observable or event in JavaScript, you should unsubscribe at a certain point to release memory, otherwise it will lead to a memory leak. Here are a few of the cases where you should explicitly unsubscribe
1. Form value change as shown in the example above
2. Router to be on safe side, though angular claim to clean it up.
3. Infinite observable
 Observable.interval(1000).subscribe(val => console.log(val))
 Observable.fromEvent(this.element.nativeElement, 'click').subscribe(() => console.log('hi'));
For the following case you don't need to unsubscribe
1. async pipe - When the component gets destroyed, the async pipe unsubscribes automatically.
2. Finite observable - When you have a finite sequence, usually you don't need to unsubscribe, for example when using the HTTP service or the timer observable

Apr 10, 2017

Angular 2 - Forms & Validation

Template Based form 
For this you need to import FormsModule. As soon as you import FormsModule, the ngForm directive becomes active on all <form> tags. You can export it into a local template variable (#myForm="ngForm"). This gives access to the aggregate form value (myForm.value), child controls (myForm.controls['control name attribute']) and validity status (myForm.controls['formElement'].valid or myForm.valid), as well as user interaction properties like dirty (myForm.controls['formElement'].dirty), touched (myForm.controls['formElement'].touched) etc.
Angular provides the ngSubmit directive ((ngSubmit)="saveMe(myForm.value)") which prevents the form from being posted to the server the default browser way.
For binding you can use ngModel. If you need two-way binding then use [()]. () is the HTML-to-component direction, [] is component-to-HTML.
[(ngModel)]="somePropertyDefinedInComponent" - ngModel requires a name attribute
In theory you should be able to use the input event and then assign any of the component's properties to the value of the input element. Most likely you will not do this, as Angular provides shortcut methods as described earlier.
(input)="somePropertyDefinedInComponent=$event.target.value"
ngModelGroup - If you want to group certain properties nested within a property then use ngModelGroup="property under which all elements will be nested"
Model based form or reactive form
In the template based approach all the logic resides in the HTML, so complex scenarios (like cross-field validation etc.) may not be easy to achieve, and you also cannot unit test your code. For such cases you can use a reactive form, for which you need to import ReactiveFormsModule. In your component you create a FormGroup and add all form controls to it.

   myFormGroup = new FormGroup({
      name: new FormControl('', [Validators.required, Validators.pattern('[a-zA-Z].*')]),
      address: new FormArray([
        new FormGroup({
          city: new FormControl(),
          state: new FormControl(),
        })
      ]),
      cityVisited: new FormArray([
        new FormControl()
      ])
    })

In the template, bind form formGroup attribute to the FormGroup object created in the component and input elements you need to bind it the FormControl property of component

     <form [formGroup]="myFormGroup" 
     formControlName="name" or [formControl]="myFormGroup.controls.name"
     <div *ngFor="let item of myFormGroup.controls.address.controls; let i=index">
         <input  [formControl]="item.controls.state" />
     </div>
     <div *ngFor="let item of myFormGroup.controls.cityVisited.controls; let i=index">
         <input  [formControl]="item" />
     </div>


FormGroup tracks the value and validity state of a group of FormControl instances.

Custom Validation
Create a function which takes a FormControl as a parameter and returns an object (the property usually named after the validator, with any value which you may want to show). If the function returns null then it's valid, otherwise it's invalid. You can also create a directive which implements Validator. You can call updateValueAndValidity() on a specific control to trigger its validation.

Apr 6, 2017

Angular 2 Directive

A directive changes the appearance or behavior of an element. A component is an element whereas a directive is an attribute. It is defined similarly to a component; in place of @Component you use @Directive. Also notice the selector is wrapped with [], which indicates it's an attribute.

@Directive({
selector: "[my-directive]",
})

Within directive you can get reference of the element like this

private el: HTMLElement;
        @Input("modal-trigger") modalId: string; //you need to wrap modal-trigger in "" as it has - which is not allowed in variable name

        constructor(ref: ElementRef) {
               this.el = ref.nativeElement;
        }

Now finally you can add this directive to any of the element
<some-element my-directive="some-data-to-be-passed">

Angular 2 Routes

Setting up routing requires that we define a base path, import the Angular router, configure the routes to define which route path activates which component, identify where to place the activated component's template, and activate routes to navigate based on user actions
 
The RouterModule provides the router service to manage navigation and URL manipulation, configuration for configuring routes, and directives for activating and displaying routes. Since the router service deals with a globally shared resource, the URL location, there can only be one active router service. To ensure that there is always only one active router service, even when importing RouterModule multiple times, RouterModule provides two methods: forRoot and forChild. RouterModule.forRoot declares the router directives, manages our route configuration, and registers the router service. We use it only once in an application; for feature routes use forChild.
 

Order In which Route Path is Evaluated

The router will pick the first route with a path that matches the URL segments. It merges the application routes explicitly defined in the application module with the routes from all imported feature modules. The routes which are explicitly configured in a module are processed last, after any imported modules.
 
AppModule
 imports: [
  ...
  RouterModule.forRoot([
   { path: '', redirectTo: '/home', pathMatch: 'full' },
   { path: 'home', component: HomepageComponent },
   { path: '**', component: PageNotFoundComponent }
  ]),
  UseraccountModule,
..
 ]
 
UseraccountModule
 imports: [
  CommonModule,
  RouterModule.forChild([
   { path: 'signin', component: SigninComponent },
   { path: 'signup', component: SignupComponent }
  ]),
  ...
 ],

In the above case routes are evaluated as follows; notice it will first evaluate routes from the imported UseraccountModule and then routes from AppModule.
 { path: 'signin', component: SigninComponent },
 { path: 'signup', component: SignupComponent }
 { path: '', redirectTo: '/home', pathMatch: 'full' },
 { path: 'home', component: HomepageComponent },
 { path: '**', component: PageNotFoundComponent }
 

Directives

Router outlet Directive - Directive from the router library that is used like a component. It acts as a placeholder that marks the spot in the template where the router should display the components for that outlet.
RouterLink - Directive to navigate between routes. This will not load the entire page, rather just the route defined within the router outlet directive. This is different from href, as href will load the entire page and will make a server call. Within a component you can use the Router service to navigate between routes.
RouterLinkActive - Directive that lets you add a CSS class to an element when the link's route becomes active.

Angular2 Service Dependency Injection

A provider provides the concrete, runtime version of a dependency value.
{ provide: Logger, useClass: Logger} is same as Logger. This tells the injector to return instance of Logger when someone ask for Logger
{ provide: Logger, useClass: BetterLogger} - This tells the injector to return instance of BetterLogger when someone ask for Logger
Now you can inject logger by using a Logger type which is dependency injection token. In the following example an instance of Logger (or BetterLogger) will be injected via private property logger. Within the class you should get proper intellisense and type safety based on type Logger
constructor(private logger: Logger){}

It's better to always add the decorator @Injectable() to a service class, even though it's only mandatory if your service has a dependency on some other service (injects another service as a dependency of its own). Decorators simply add metadata to our code when transpiled.

MyService = __decorate([
        Object(_angular_core__WEBPACK_IMPORTED_MODULE_0__["Injectable"])(),
        __metadata("design:paramtypes", [_angular_common_http__WEBPACK_IMPORTED_MODULE_2__["HttpClient"]])
    ], MyService );
Here the paramtypes metadata is the one that is needed by Angular's DI to figure out for what type it has to return an instance.
TypeScript generates this metadata when the emitDecoratorMetadata option is set and a decorator is attached to the class, method etc.

OpaqueToken
These are used to create non-class dependencies. For example for jQuery or any other third party library which does not have a TypeScript class you can define an OpaqueToken.

import {OpaqueToken} from "@angular/core";
export let JQ_TOKEN = new OpaqueToken("my app jquery");

Similarly for DI on interface you can define OpaqueToken
export interface AppConfig {
 apiEndpoint: string;
 title: string;
}

export const HERO_DI_CONFIG: AppConfig = {
 apiEndpoint: 'api.heroes.com',
 title: 'Dependency Injection'
};

import { OpaqueToken } from '@angular/core';

export let APP_CONFIG = new OpaqueToken('my app config');

Now in the providers you can use it like this


{ provide: JQ_TOKEN, useValue: myAppjQuery }


Here you need to have myAppjQuery defined (let myAppjQuery: Object = window["$"];). Wherever you use JQ_TOKEN you will get a handle to myAppjQuery, which at runtime will be initialized as window["$"], so as long as jQuery is loaded in the root window, you should be able to use it.
{ provide: APP_CONFIG, useValue: HERO_DI_CONFIG }

The above example is for an interface, which is not a TypeScript class, so we defined an OpaqueToken for it. Wherever this token is used it will refer to the value HERO_DI_CONFIG.

Now in your application, you should be able to inject this like this
constructor(@Inject(JQ_TOKEN) private $: any){}
constructor(@Inject(APP_CONFIG) config: AppConfig) {}

Mar 31, 2017

Angular 2 Components

A component consists of two parts: one is the component class, which is like any other TS class, and the other is metadata (@Component). Here you define the selector, template, style etc.

Communicating with a child component can be done by creating an input property in the child component and passing it from the parent component's HTML as an attribute with square brackets.

Communication with parent component

  • Template variable - Using a template variable you can reference any child component output property within the template.
  • EventEmitter - Using an output property of type EventEmitter, you can emit an event which you can bind to in the parent component using parentheses. In the event you can pass any data from the child component.


CSS is encapsulated within a component, meaning CSS used in a parent component will not affect the child component and vice versa. You can still define global CSS which will be applied to all the components. 

Binding

Interpolation is used when you need to display data {{}}. The text between braces is evaluated and then converted to a string to display. You can use a JavaScript expression here which does not have or promote side effects; for example a binding cannot contain an assignment, so {{ title='test' }} is not allowed. It can also invoke a method of the host component. Though inside the method you can do some assignment, for which Angular will not throw an error, you may get unexpected results and this should be strongly avoided.

Refer to the following example; here we are assigning the title property of the host component to the value attribute of the input DOM element. This will be one way, meaning the title property of the component will be evaluated every time a change detection cycle is triggered and its string conversion will be assigned to the value attribute of the element.
<input type="text" value="{{title}}">

Similarly in the following case div element will display title property of the component
<div>{{title}}</div>

Property binding [] is used when you want to bind data to a property of a DOM element. For example in the following case the value property of the input DOM element is assigned the title property of the host component. This will again be one way, meaning the value will be re-evaluated every time a change detection cycle is triggered.
<input type="text" [value]="title" >

Event bindings are used to bind to events of the element. The expression used here can have side effects. In the following example I am calling the handleClick function of the host component on the click event.
<input type="button" (click)="handleClick()">

I can also do an assignment here, as in the following case: on the input event I am assigning the value of the target element to the title property of the host component.
<input type="text" (input)="title=$event.target.value">


Structural Directives - These are indicated by *, which indicates that they can update the DOM structure.
*ngFor - Repeats the element for each item in a collection.
*ngIf - Renders content only if the expression is true. It's not just hiding via CSS, the DOM element is not rendered at all. In case you want to just hide the element you can use the hidden property ([hidden]="some expression").
*ngSwitchCase - Used along with the non-structural directive ngSwitch.

For evaluating an expression you should use ? against an object property which can be null; this will short-circuit evaluation of the expression.

Styling
[ngStyle] takes an object of style properties and values. Using this you can style multiple properties of an element.
[ngStyle]="{'background-color': 1===2?'green':'blue', 'font-weight': 'bold'}"     -> style="background-color: blue; font-weight: bold;"

You can also use [style.background-color]="'blue'". Here we are directly accessing the style property of the element. Refer above for property binding.


[ngClass] - This allows you to update classes on the element. It is specified as keys (class names) with boolean expression values.
<div [ngClass]="{'small-text': 1===1, 'red': true}"> => class="small-text red"

Property and Attribute

Property and attribute are often used interchangeably. When defining an HTML element you can specify its attributes, and when the browser parses the code a corresponding DOM object is created, which has properties. For example, the following element has three attributes: id, type and value.

<input id="myElement" type="text" value="something" >

Once the browser creates the DOM, this object will contain multiple properties like attributes, className, disabled, hidden, width, id, innerText, innerHTML, value, type, style etc. Write the following JavaScript and inspect the different properties on the element. You will notice an attributes property which includes all the attributes defined in the HTML element.

let input1 = document.getElementById("myElement");

Properties and attributes don't have a one-to-one relationship, but more often than not many of the properties relate to attributes.

Projection

This enables you to build reusable components by giving the consumer of the component the option of injecting markup. Here we are saying that my-component will show the <h4> tag as it is, and it will be shown everywhere we use my-component. The consumer also has the option of passing content which will replace ng-content. You can have multiple ng-content elements with selectors, which gives the option to inject multiple pieces of content.

@Component({
    selector: "my-component",
    template: `
    <h4>something which will be shown everywhere</h4>
<div (click)="toggelContent()" class="well thubnail">
        <ng-content select=".title"></ng-content>
        <ng-content select="[body]"></ng-content>
    </div>

`,
})


You can use this component like this.

<my-component>
<div class="title">
my title
</div>
<div body>
my body
</div>
</my-component>