29 January 19
An AWS Region is a completely independent entity in a geographical area. Each AWS Region contains two or more Availability Zones. Within a Region, Availability Zones are connected through low-latency links. Since each AWS Region is isolated from other Regions, this design provides very high fault tolerance and stability. To launch an EC2 instance, we have to select an AMI that resides in the same Region.
The important components of IAM are as follows:
IAM User: An IAM User is a person or service that interacts with AWS. A user can sign in to the AWS Management Console to perform tasks in AWS.
IAM Group: An IAM Group is a collection of IAM Users. We can assign permissions to an IAM Group, which helps in managing a large number of IAM Users. We can simply add an IAM User to or remove one from an IAM Group to manage permissions.
IAM Role: An IAM Role is an identity to which we give permissions. A Role does not have any long-term credentials (password or access keys). An IAM User can temporarily assume a Role to perform certain tasks in AWS.
IAM Permission: In IAM we can create two types of permissions: identity-based and resource-based. An identity-based permission allows or denies actions on AWS resources and is assigned to a User, Role or Group. A resource-based permission is attached to a resource such as an S3 bucket or a Glacier vault and specifies who has access to that resource.
IAM Policy: An IAM Policy is a document in which we list permissions by specifying Actions, Resources and Effects. This document is in JSON format. We can attach a Policy to an IAM User, Group or Role.
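As a concrete sketch of such a Policy document, the snippet below builds an identity-based policy allowing read access to an S3 bucket and serializes it as the JSON that IAM expects. The bucket name and chosen actions are illustrative assumptions, not taken from the text above.

```python
import json

# Identity-based IAM policy sketch: Effect, Action, Resource.
# "example-bucket" is a hypothetical placeholder, not a real bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# IAM expects the policy as a JSON document.
policy_json = json.dumps(policy, indent=2)
```

The same JSON document can then be attached to a User, Group or Role.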
Some of the important features of Amazon S3 are as follows:
Amazon S3 is designed to provide 99.999999999% durability of objects. This is eleven 9s, with nine of them after the decimal point.
Amazon S3 supports the following consistency levels for different requests: read-after-write consistency for PUTs of new objects, and eventual consistency for overwrite PUTs and DELETEs.
Different Storage tiers in Amazon S3 are as follows:
S3 Standard: In this tier, S3 supports durable storage of files that become immediately available. This tier is used for frequent file storage and access.
S3 Standard-Infrequent Access (IA): In this tier, S3 provides durable storage that is immediately available when accessed, but it is priced for files that are accessed infrequently.
S3 Reduced Redundancy Storage (RRS): In this tier, S3 provides the option to customers to store data at lower levels of redundancy. In this case, data is copied to multiple locations but not on as many locations as standard S3.
Amazon S3 supports storing objects of up to 5 gigabytes in a single PUT request. For a file greater than 100 megabytes, AWS recommends the Multipart Upload utility (and it is required for objects larger than 5 gigabytes). By using Multipart Upload we can upload a large file in multiple parts. Each part is independently uploaded to S3, and it does not matter in what order each part is uploaded. Multipart Upload even supports uploading these parts in parallel to decrease the overall time. Once all the parts are uploaded, S3 joins them back into the single object or file from which the parts were created.
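As a rough sketch of the arithmetic behind Multipart Upload, the helper below splits a file into independent (offset, length) parts. The 100-megabyte default part size follows the recommendation above; the 5 MB minimum part size and 10,000-part maximum are documented S3 limits.

```python
def plan_multipart_upload(file_size_bytes, part_size_bytes=100 * 1024 * 1024):
    """Split a file into (offset, length) tuples for a multipart upload.

    S3 requires every part except the last to be at least 5 MB,
    and allows at most 10,000 parts per upload.
    """
    if part_size_bytes < 5 * 1024 * 1024:
        raise ValueError("S3 parts (except the last) must be at least 5 MB")
    parts = []
    offset = 0
    while offset < file_size_bytes:
        length = min(part_size_bytes, file_size_bytes - offset)
        parts.append((offset, length))
        offset += length
    if len(parts) > 10000:
        raise ValueError("S3 allows at most 10,000 parts per upload")
    return parts

# A hypothetical 5 GiB file split into 100 MiB parts.
parts = plan_multipart_upload(5 * 1024**3)
```

Each planned part could then be uploaded independently, and in parallel, before the final join.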
Amazon S3 provides a DELETE API to delete an object. If we want to delete an object from a version-controlled bucket, we can specify the version of the object that we want to delete; the other versions of the object still exist within the bucket. If we do not specify a version and just pass the key name, Amazon S3 inserts a delete marker and returns its version ID, and the object no longer appears in the bucket listing. If Multi-Factor Authentication (MFA) Delete is enabled on a bucket, the DELETE request will fail unless we supply an MFA token.
Amazon Glacier is an extremely low-cost, cloud-based storage service provided by Amazon. We mainly use Amazon Glacier for long-term backup. Amazon Glacier can be used for storing data archives for months, years or even decades. It can also be used for long-term immutable storage for regulatory and archival requirements; it provides Vault Lock support for this purpose. In this option, we write data once but can read it multiple times. One use case is storing a certificate that is issued once: the original owner keeps the main copy, while other users can only view it.
No, we cannot disable versioning on a version-enabled bucket in Amazon S3; we can only suspend it. Once we suspend versioning, Amazon S3 stops creating new versions of objects and stores new objects with a null version ID. When we overwrite an existing object, it replaces only the object with the null version ID. Any existing versions of the object remain in the bucket, but no new versions are created apart from the null-version-ID object.
We can use Cross-Region Replication in Amazon S3 to make copies of objects across buckets in different AWS Regions. This copying takes place automatically and asynchronously. We have to add a replication configuration on our source bucket in S3 to make use of Cross-Region Replication. It will create exact replicas of the objects from the source bucket in destination buckets in different Regions.
Some of the main use cases of Cross Region Replication are as follows:
Compliance: Sometimes, there are laws or regulatory requirements that require storing data at geographically distant locations. This kind of compliance can be achieved by using AWS Regions that are spread across the world.
Failover: At times, we want to minimize the probability of system failure due to complete blackout in a region. We can use Cross-Region Replication in such a scenario.
Latency: In case we are serving multiple geographies, it makes sense to replicate objects in the geographical Regions that are closer to end customer. This helps in reducing the latency.
No. We have to enable versioning on both the source and the destination bucket to perform Cross-Region Replication.
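As an illustration, a minimal replication configuration in the shape accepted by the S3 PutBucketReplication API might look as follows. The role ARN, bucket ARN and rule ID are hypothetical placeholders, and versioning must already be enabled on both buckets.

```python
# Sketch of a Cross-Region Replication configuration payload.
# All ARNs and names below are hypothetical placeholders.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/replication-role",
    "Rules": [
        {
            "ID": "replicate-all",
            "Status": "Enabled",
            "Prefix": "",  # an empty prefix replicates every object
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket-in-another-region"
            },
        }
    ],
}
```

This dictionary would be passed as the replication configuration when enabling replication on the source bucket.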
There are mainly two types of Object Lifecycle Management actions in Amazon S3.
Transition Actions: These actions define when an Object transitions from one storage class to another. E.g. a new object may transition to the STANDARD_IA (Infrequent Access) class 60 days after creation, and then to GLACIER 180 days after creation.
Expiration Actions: These actions specify what happens when an Object expires. We can ask S3 to delete an object completely on expiration.
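The two action types can be combined in one lifecycle configuration. The sketch below mirrors the 60-day and 180-day transitions described above and adds an illustrative 365-day expiration; the rule ID and "logs/" prefix are assumptions.

```python
# Lifecycle configuration sketch in the shape used by the S3
# PutBucketLifecycleConfiguration API. The prefix, rule ID and
# 365-day expiration are illustrative assumptions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 60, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ],
}
```

With this rule, matching objects move to cheaper storage classes over time and are deleted completely after a year.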
If an application is content rich and is being used across multiple locations, we can use Amazon CloudFront to increase its performance. Some of the techniques used by Amazon CloudFront are as follows:
Caching: Amazon CloudFront caches the copies of our application’s content at locations closer to our viewers. Due to caching, our users get our content very fast. Also, caching content reduces the load on our main servers.
Edge/Regional Locations: CloudFront uses a global network of Edge and Regional edge locations to cache our content. These locations cater to almost all of the geographical areas across the world.
Persistent Connections: In certain scenarios, CloudFront keeps persistent connections with the main server to fetch the content quickly.
Other Optimization: Amazon CloudFront also uses other optimization techniques, such as tuning the TCP initial congestion window, to deliver a high-performance experience.
A Regional Edge Cache location lies between the main webserver and the global Edge locations. When the popularity of an object decreases, a global Edge location may evict it from its cache. But a Regional Edge Cache maintains a larger cache, so the object can stay there for a longer time. When CloudFront does not find an object at a global Edge location, it looks for it in the Regional Edge Cache before going back to the main webserver. This improves the performance of serving content to users in Amazon CloudFront.
We can get following benefits by Streaming content:
Control: We can provide more control to our users for what they want to watch. In a video streaming, users can select the locations in video where they want to start watching from.
Content: With streaming our entire content does not stay at a user’s device. User receives only the part he or she is watching. Once the session is over, content is removed from the user’s device.
Cost: With streaming, there is no need to download all the content to a user’s device. A user can start viewing content as soon as some part is available for viewing. This saves costs since we do not have to download a large media file before starting each viewing session.
In AWS, we can use Lambda@Edge to reduce network latency for end users. With Lambda@Edge there is no need to provision or manage servers. We can just upload our Node.js code to AWS Lambda and create functions that will be triggered on CloudFront requests. When a request for content is received at a CloudFront edge location, the Lambda code is ready to execute. This is a very good option for scaling up operations in CloudFront without managing multiple servers.
Different types of events triggered by Amazon CloudFront are as follows:
Viewer Request: When an end user or a client program makes an HTTP/HTTPS request to CloudFront, this event is triggered at the Edge Location closer to the end user.
Viewer Response: This event is triggered when a CloudFront server is ready to respond to a request.
Origin Request: When a CloudFront server does not have the requested object in its cache, the request is forwarded to the Origin server. At this time, this event is triggered.
Origin Response: This event is triggered when a CloudFront server at an Edge location receives the response from the Origin server.
In Amazon CloudFront we can detect the country from which end users are requesting our content. Amazon CloudFront passes this information to our Origin server in a new HTTP header. Based on the country, we can generate different versions of the same content. These versions can be cached at the Edge Locations that are closer to the end users of that country. In this way, we are able to serve our end users based on their geographic location and provide a rich user experience.
Some of the main features of Amazon CloudFront are as follows:
Device Detection, Protocol Detection, Geo Targeting, Cache Behavior, Cross Origin Resource Sharing, Multiple Origin Servers, HTTP Cookies, Query String Parameters, Custom SSL
Amazon S3 is a very secure storage service. Some of the main security mechanisms available in Amazon S3 are as follows:
Access: When we create a bucket or an object, only the owner gets access to the bucket and objects.
Authentication: Amazon S3 also supports user authentication to control who has access to a specific object or bucket.
Access Control List: We can create Access Control Lists (ACL) to provide selective permissions to users and groups for S3 objects.
HTTPS: Amazon S3 also supports HTTPS protocol to securely upload and download data from cloud.
Encryption: We can also use Server Side Encryption (SSE) in Amazon S3 to encrypt data.
We can use AWS Storage Gateway (ASG) service to connect our local infrastructure of files etc. with Amazon cloud services for storage.
Some of the main benefits of AWS Storage Gateway are as follows:
Local Use: We can use ASG to integrate our data in multiple Amazon storage services like S3, Glacier etc with our local systems. We can continue to use our local systems seamlessly.
Performance: ASG provides better performance by caching data in local disks. Although the data stays in the cloud, the performance we get is similar to that of local storage.
Easy to use: ASG is delivered as a virtual machine with an easy-to-use interface. There is no need to install any client or provision rack space to use ASG. These virtual machines can run on local systems as well as in AWS.
Scale: We get the storage at a very high scale with ASG. Backend of ASG is Amazon cloud, therefore, it can handle large amounts of workloads and storage needs.
Optimized Transfer: ASG performs many optimizations, due to which only the changes to data are transferred. This helps in minimizing the use of bandwidth.
AWS Storage Gateway (ASG) is a very versatile product from AWS in its usage. It solves a variety of problems at enterprise level. Some of the main use cases of ASG are as follows:
Backup systems: We can use ASG to create backup systems. Data from local storage can be backed up into AWS cloud services by using ASG. We can also restore data from this backup solution as needed. It is a replacement for tape-based backup systems.
Variable Storage: With ASG, we can grow or shrink our Storage as per our needs. There is no need to add racks, disks etc to expand our storage systems. We can manage the fluctuations in our storage needs gracefully by using ASG.
Disaster Recovery: We can also use ASG for disaster recovery mechanism. We can create snapshots of our local volumes in Amazon EBS. In case of a local disaster, we can use our applications in cloud and recover from the snapshots created in EBS.
Hybrid Cloud: At times we want to use our local applications with cloud services. ASG helps in implementing Hybrid cloud solutions in which we can utilize cloud storage services with our on-premise local applications.
AWS provides a useful service known as Snowball for transporting very large amounts of data at the scale of petabytes. With Snowball, we can securely transfer data without any network cost. It is a physical data transfer solution to store data in AWS cloud. Once we create a Snowball job in AWS console, Amazon ships a physical storage device to our location. We can copy our data to this storage device and ship it back. Amazon services will take the Snowball device and transfer the data to Amazon S3. It is an innovative use of physical, virtual and cloud computing technology for high volume data transfer.
In Amazon EC2, we can even bid for getting a computing instance. Any instance procured by bidding is a Spot Instance. Multiple users bid for an EC2 Instance. Once the bid price exceeds the Spot price, the user with the highest bid gets it. As long as their bid price remains higher than the Spot price, they can keep using it. Spot price varies with supply and demand. Once Spot price exceeds Bid price, the instance will be taken back from user.
Spot Instance and On-demand Instance are very similar in nature. The main difference between these is of commitment. In Spot Instance, there is no commitment. As soon as the Bid price exceeds Spot price, a user gets the Instance.
In an On-demand Instance, a user has to pay the On-demand rate specified by Amazon. Once they have bought the Instance they have to use it by paying that rate.
In a Spot Instance, once the Spot price exceeds the Bid price, Amazon will shut down the instance. The benefit to the user is that they will not be charged for the partial hour in which the Instance was taken back from them.
Amazon Elastic Load Balancing (ELB) provides two types of load balancers:
Classic Load Balancer: This Load Balancer uses application or network load information to route traffic. It is a simple approach of load balancing to divide load among multiple EC2 instances.
Application Load Balancer: This Load Balancer uses advanced application level information to route the traffic among multiple EC2 instances. It can even use content of the request to make routing decisions.
Some of the main features of Classic Load Balancer (CLB) in Amazon EC2 are as follows:
Health Check: Based on the result of Health Check, Classic Load Balancer can decide to route the traffic. If any instance has unhealthy results, CLB will not route the traffic to that instance.
Security: We can create security groups for CLB in Virtual Private Cloud (VPC). With these features, it is easy to implement secure load balancing within a network.
High Availability: With CLB, we can distribute traffic among EC2 instances in single or multiple Availability Zones. This helps in providing very high level of availability for the incoming traffic.
Sticky Sessions: CLB also supports sticky sessions by using cookies. Sticky sessions make sure that traffic from a user is always routed to the same instance, so that the user gets a seamless experience.
IPv6: CLB also supports Internet Protocol version 6.
Operational Monitoring: We can also perform operational monitoring in CLB and collect statistics on request count, latency etc. These metrics can be monitored in CloudWatch.
Main features of Application Load Balancer (ALB) are as follows:
Content-Based Routing: In ALB, we can make use of content in the request to decide the routing of a request to a specific service.
HTTP/2: ALB supports the new version of the HTTP protocol, in which multiple requests can be sent on the same connection. It also supports TLS and header compression.
WebSockets: ALB supports WebSockets in EC2. With WebSockets, a server can exchange real-time messages with the end-users.
Layer-7 Load Balancing: ALB can also load balance HTTP/HTTPS application with layer-7 specific features.
Delete Protection: ALB also provides Delete Protection option by which we can prevent it from getting deleted by mistake.
Containerized Application Support: We can use ALB to load balance multiple containers running across multiple ports on the same EC2 instance.
In Amazon Web Services, a Volume is a durable, block-level storage device that can be attached to a single EC2 instance. In simple words, it is like a hard disk that we can read from or write to. A Snapshot is created by copying the data of a Volume to another location at a specific point in time. We can even replicate the same Snapshot to multiple Availability Zones. So a Snapshot is a single point-in-time view of a Volume. We can create a Snapshot only when we have a Volume; also, from a Snapshot we can create a Volume. In AWS, we have to pay for the storage used by a Volume as well as the storage used by Snapshots.
Amazon EBS provides following two main types of Volume:
Solid State Drive (SSD): This type of Volume is backed by a Solid State Drive. It is suitable for transactional work in which there are frequent reads and writes. It is generally more expensive than the HDD based volume.
Hard Disk Drive (HDD): This type of Volume is backed by a Hard Disk Drive. It is more suitable for large streaming workloads in which throughput is more important than transactional performance. It is a cheaper option compared to an SSD Volume.
Some of the Amazon EC2 instance types provide the option of using directly attached block-device storage. This kind of storage is known as an Instance Store. With other Amazon EC2 instance types, we have to attach an Elastic Block Store (EBS) volume.
Persistence: The main difference between Instance Store and EBS is that in Instance Store data is not persisted for long-term use. If the Instance terminates or fails, we can lose Instance Store data. Any data stored in EBS is persisted for longer duration. Even if an instance fails, we can use the data stored in EBS to connect it to another EC2 instance.
Encryption: EBS provides a full-volume encryption of data stored in it. Whereas, Instance Store is not considered as a good storage option for encrypted data.
Amazon provides an Elastic IP Address with an AWS account. An Elastic IP address is a public and static IP address based on IPv4 protocol. It is designed for dynamic cloud computing. This IP address is reachable from the Internet. If we do not have a specific IP address for our EC2 instance, then we can associate our instance to the Elastic IP address of our AWS account. Now our instance can communicate on the Internet with this Elastic IP Address.
AWS provides an option of creating a Placement Group in EC2 to logically group the instances within a single Availability Zone. We get the benefits of low network latency and high network throughput by using a Placement Group. A Placement Group is a free option as of now. When we stop an instance and restart it at a later point of time, it will run in the same Placement Group. The biggest limitation of a Placement Group is that we cannot add Instances from multiple Availability Zones to one Placement Group.
Amazon CloudWatch is a monitoring service by Amazon for cloud based AWS resources. Some of the main options in Amazon CloudWatch are as follows:
Logs: We can monitor and store logs generated by EC2 instances and our applications in CloudWatch. We can store the log data for a time period convenient for our use.
Dashboard: We can create visual Dashboards in the form of graphs to monitor our AWS resources in CloudWatch.
Alarms: We can set alarms in CloudWatch. These alarms can notify us by email or text when a specific metric crosses a threshold. They can also detect the event when an Instance starts or shuts down.
Events: In CloudWatch we can also set up events that are triggered by an Alarm. These events can perform an automated action in response, such as sending a notification or invoking another AWS service.
In AWS, we can create applications based on AWS Lambda. These applications are composed of functions that are triggered by an event. These functions are executed by AWS in cloud. But we do not have to specify/buy any instances or server for running these functions. An application created on AWS Lambda is called Serverless application in AWS.
We can use AWS Serverless Application Model (AWS SAM) to deploy and run a serverless application. AWS SAM is not a server or software. It is just a specification that has to be followed for creating a serverless application. Once we create our serverless application, we can use CodePipeline to release and deploy it in AWS. CodePipeline is built on Continuous Integration Continuous Deployment (CI/CD) concept.
AWS Lambda is a service from Amazon to run a specific piece of code in Amazon cloud, without provisioning any server. So there is no effort involved in administration of servers. In AWS Lambda, we are not charged until our code starts running. Therefore, it is a cost effective solution to execute code in cloud. AWS Lambda can automatically scale our application when the number of requests to run the code increases. Therefore, we do not have to worry about scalability of application while using AWS Lambda.
Web Application: We can integrate AWS Lambda with other AWS Services to create a web application that can scale up or down with zero administrative effort for server management, backup or scalability.
Internet of Things (IoT): In the Internet of Things applications, we can use AWS Lambda to execute a piece of code on the basis of an event that is triggered by a device.
Mobile Backend: We can create Backend applications for Mobile apps by using AWS Lambda.
Real-time Stream Processing: We can use AWS Lambda with Amazon Kinesis for processing real-time streaming data.
ETL: We can use AWS Lambda for Extract, Transform, and Load (ETL) operations in data warehousing applications. AWS Lambda can execute the code that can validate data, filter information, sort data or transform data from one form to another form.
Real-time File processing: AWS Lambda can also be used for handling any updates to a file in Amazon S3. When we upload a file to S3, AWS Lambda can create thumbnails, index files, new formats etc in real-time.
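As a minimal sketch of real-time file processing, the handler below only parses the records of an S3 upload event in the shape Lambda receives from an S3 trigger; the actual thumbnail or indexing work is indicated in a comment, since it would need AWS access. The bucket and key names in the stub event are hypothetical.

```python
# Sketch of a Lambda handler triggered by an S3 upload event.
def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Here we could download the object, create a thumbnail,
        # and write the result back to another bucket.
        processed.append((bucket, key))
    return processed

# Stub S3 event with hypothetical bucket and object names.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photos"}, "object": {"key": "cat.jpg"}}}
    ]
}
result = handler(sample_event, None)
```

Lambda would invoke this handler automatically each time a matching object is uploaded to the bucket.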
In AWS Lambda we can run a function in synchronous or asynchronous mode. In synchronous mode, if the AWS Lambda function fails, it just returns an exception to the calling application. In asynchronous mode, if the AWS Lambda function fails, it is retried (by default, two more times). If AWS Lambda is running in response to an event in Amazon DynamoDB or Amazon Kinesis, the event is retried until the Lambda function succeeds or the data expires. In DynamoDB or Kinesis, AWS maintains the data for at least 24 hours.
Route 53 service from Amazon provides multiple options for creating a Routing policy. Some of these options are as follows:
Simple Routing: In this option, Route 53 will respond to DNS queries based on the values in resource record set.
Weighted Routing: In this policy, we can specify weights according to which multiple resources will share the load. E.g. if we have two webservers, we can divide the load in a 40/60 ratio between these servers.
Latency Routing: In this option, Route 53 will respond to DNS queries with the resources that provide the best latency.
Failover Routing: We can configure active/passive failover by using this policy. One resource will get all the traffic when it is up. Once first resource is down, all the traffic will be routed to second resource that is active during failover.
Geolocation Routing: As the name suggests, this policy works on the basis of location of end users from where requests originate.
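Weighted Routing can be pictured as proportional random selection. The sketch below is not the Route 53 API; it only illustrates how a 40/60 weighting like the example above splits traffic between two hypothetical servers.

```python
import random

def pick_record(records, rng=random.random):
    """Pick a record name with probability proportional to its weight.

    records: list of (name, weight) pairs; rng: callable returning
    a float in [0, 1), injectable for deterministic testing.
    """
    total = sum(weight for _, weight in records)
    point = rng() * total
    for name, weight in records:
        point -= weight
        if point < 0:
            return name
    return records[-1][0]  # guard against floating-point edge cases

# Hypothetical servers weighted 40/60, as in the example above.
servers = [("server-a.example.com", 40), ("server-b.example.com", 60)]
```

Over many queries, roughly 40% of responses would name server-a and 60% server-b.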
Amazon DynamoDB is a highly scalable NoSQL database that has very fast performance. Some of the main benefits of using Amazon DynamoDB are as follows:
Administration: In Amazon DynamoDB, we do not have to spend effort on administration of database. There are no servers to provision or manage. We just create our tables and start using them.
Scalability: DynamoDB provides the option to specify the capacity that we need for a table. Rest of the scalability is done under the hood by DynamoDB.
Fast Performance: Even at a very high scale, DynamoDB delivers very fast performance with low latency. It will use SSD and partitioning behind the scenes to achieve the throughput that a user specifies.
Access Control: We can integrate DynamoDB with IAM to create fine-grained access control. This can keep our data secure in DynamoDB.
Flexible: DynamoDB supports both document and key-value data structures. So it helps in providing flexibility of selecting the right architecture for our application.
Event Driven: We can also make use of AWS Lambda with DynamoDB to perform any event driven programming. This option is very useful for ETL tasks.
The basic Data Model in Amazon DynamoDB consists of following components:
Table: In DynamoDB, a Table is collection of data items. It is similar to a table in a Relational Database. There can be infinite number of items in a Table. There has to be one Primary key in a Table.
Item: An Item in DynamoDB is made up of a primary key or composite key and a variable number of attributes. The number of attributes in an Item is not bounded by a limit. But total size of an Item can be maximum 400 kilobytes.
Attribute: In DynamoDB, we can associate one or more Attributes with an Item. An Attribute has a name as well as one or more values. The combined size of all attribute names and values in an Item counts toward the 400-kilobyte item limit.
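As an illustration of this data model, here is an Item in DynamoDB's low-level typed-attribute format ("S" marks a string value, "N" a number), along with a rough size estimate against the 400-kilobyte item limit. The table's attribute names and values are hypothetical.

```python
# A hypothetical DynamoDB Item in the low-level typed format.
item = {
    "UserId": {"S": "user-42"},   # partition (primary) key
    "SignupYear": {"N": "2019"},  # numbers are sent as strings
    "Nickname": {"S": "sam"},
}

def item_size_bytes(item):
    """Rough size estimate: UTF-8 bytes of attribute names plus
    their string/number values. DynamoDB caps an item at 400 KB."""
    size = 0
    for name, typed_value in item.items():
        size += len(name.encode("utf-8"))
        for value in typed_value.values():
            size += len(str(value).encode("utf-8"))
    return size
```

This estimate is a simplification of DynamoDB's real accounting, but it shows why large blobs belong in S3 rather than in an Item.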
Amazon DynamoDB supports both document-based and key-value-based NoSQL models. Due to this, the APIs in DynamoDB are generic enough to serve both types.
Some of the main APIs available in DynamoDB are as follows:
CreateTable, UpdateTable, DeleteTable, DescribeTable, ListTables, PutItem, GetItem, BatchWriteItem, BatchGetItem, UpdateItem, DeleteItem, Query, Scan
Amazon DynamoDB is used for storing structured data. The data in DynamoDB is also indexed by a primary key for fast access. Reads and writes in DynamoDB have very low latency because it uses SSD.
Amazon S3 is mainly used for storing unstructured data as binary large objects (BLOBs). It does not have a fast index like DynamoDB. Therefore, we should use Amazon S3 for storing objects with infrequent access requirements.
Another consideration is size of the data. In DynamoDB the size of an item can be maximum 400 kilobytes. Whereas Amazon S3 supports size as large as 5 terabytes for an object.
In conclusion, DynamoDB is more suitable for storing small objects with frequent access and S3 is ideal for storing very large objects with infrequent access.
Amazon ElastiCache is mainly used for improving the performance of web applications by caching the information that is frequently accessed. The ElastiCache web service provides very fast access to this information by using in-memory caching. Behind the scenes, ElastiCache supports open-source caching platforms like Memcached and Redis. We do not have to manage separate caching servers with ElastiCache. We can just add critical pieces of data in ElastiCache to provide very low latency access to applications that need this data very frequently.
Amazon Kinesis Streams helps in creating applications that deal with streaming data. Kinesis streams can work with data streams up to terabytes per hour rate. Kinesis streams can handle data from thousands of sources. We can also use Kinesis to produce data for use by other Amazon services. Some of the main use cases for Amazon Kinesis Streams are as follows:
Real-time Analytics: At times, for real-time events like a Black Friday sale or a major game event, we get a large amount of data in a short period of time. Amazon Kinesis Streams can be used to perform real-time analysis on this data and make use of that analysis very quickly. Prior to Kinesis, this kind of analysis would take days or weeks; now we can start using the results within a few minutes.
Gaming Data: In online games, thousands of users play and generate a large amount of data. With Kinesis, we can process the streams of data generated by a large number of online players and use them to implement dynamic features based on the actions and behavior of players.
Log and Event Data: We can use Amazon Kinesis to process the large amount of Log data that is generated by different devices. We can build live dashboards, alarms, triggers based on this streaming data by using Amazon Kinesis.
Mobile Applications: In mobile applications, there is a wide variety of data available due to the large number of parameters like location of the mobile, type of device, time of day etc. We can use Amazon Kinesis Streams to process the data generated by a mobile app. The output of such processing can be used by the same mobile app to enhance the user experience in real time.
Amazon SQS stands for Simple Queue Service. Whereas, Amazon SNS stands for Simple Notification Service.
SQS is used for implementing Messaging Queue solutions in an application. We can de-couple the applications in cloud by using SQS. Since all the messages are stored redundantly in SQS, it minimizes the chance of losing any message.
SNS is used to deliver messages to Amazon SQS queues, AWS Lambda functions, or any HTTP endpoint. Amazon SNS is widely used for sending messages to mobile devices as well; it can even send SMS messages to cell phones.
Alexa for Business is a service from Amazon for supporting business operations. It is like an intelligent assistant. Business users get benefits like managing their schedule, dialing into conference calls, and controlling devices remotely by using Alexa. The biggest benefit of Alexa is its voice-enabled features; using voice as an input/output interface helps keep people effective. Users can operate shared devices like printers at the workplace by giving them voice commands.
Alexa provides support for building custom skills. These skills are available on shared devices to users who are enrolled to use them. Some examples of custom skills are: automating helpdesk ticket creation, opening doors for guests, ordering food for team events from pre-authorized vendors etc. There is an Alexa Skills Kit with APIs and directions to build custom skills.
Amazon provides support for Deep Learning in machine learning projects. If we want to use an AWS EC2 instance for deep learning, we can use an AWS Deep Learning AMI. The AWS Deep Learning AMI has built-in support for Conda and comes with pre-installed Python environments. It also includes popular deep learning frameworks installed as pip packages. We can also use Jupyter notebooks in conjunction with the AWS Deep Learning AMI to get a visual interface.