AWS Global Infrastructure
you have Regions, which are simply geographic locations, e.g. US
a Region has 2 or more AZs; an AZ (Availability Zone) is a data center
you have many Edge Locations = CDN = CloudFront
VPC: virtual data center
Direct Connect: AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations, so no internet is involved ==> fast connection
Route 53: DNS (53 is the DNS port)
EC2 Container Service: for Docker containers
Elastic Beanstalk: deploying and scaling web applications written in Java, .NET, PHP, ..., you do nothing, just upload your application and Elastic Beanstalk will deploy it
Lambda: run code, you are charged only while the code is running (their website is built using Lambda, no EC2)
Glacier: archiving and long-term backup, 3-5 hours to access your files
S3: store objects
Cloud Front: CDN
EFS: elastic file system, it is a file system on cloud
Snowball: import/export from hard disk, not over the internet; a suitcase-looking device is sent to you, you copy your data onto it, then send it back to whichever region you like; you pay for it on a daily basis.
Storage Gateway: a way to back up your local hard disk to the cloud
RDS: relational databases, e.g. MySQL & Oracle
DynamoDB: NoSQL database
ElastiCache: to cache data from your database in the cloud
Redshift: data warehousing in the cloud
DMS: Database Migration Service (migrate from MySQL to Oracle, for example)
EMR: Elastic MapReduce, for big data processing
Data Pipeline: move data from one AWS service to another AWS service
Elasticsearch Service: run Elasticsearch in the cloud
Kinesis: streaming data in AWS, which means it can receive a stream of data like logs and run analysis on this data
QuickSight: a BI (business intelligence) service, ad hoc analytics.
Security & Identity:
IAM: identity management
Directory Service: run Microsoft Active Directory in the cloud
Inspector: gives you suggestions to increase security
WAF: Web application firewall
Cloud HSM: Hardware security module
KMS: key management service
Cloud Watch: create metrics
Cloud Formation: Create a template to build your AWS resources
Cloud Trail: audits what you are doing on AWS services
OpsWorks: configuration management using Chef
Config: configuration history and configuration management, and apply rules
Service Catalogue: create a catalogue of services that your business has approved for use on AWS
Trusted Advisor: scan your environment and tell you how to save money and increase security.
API Gateway: create publish and maintain APIs
APP Stream: deliver your Windows applications to any device.
Cloud Search: search solution for your website
Elastic Transcoder: transcoding service in cloud, for media files
SES: Simple Email Service, for sending and receiving emails
SQS: Simple Queue Service
SWF: Simple Workflow Service
CodeCommit: like Git hosting
CodeDeploy: automate code deployment
CodePipeline: continuous delivery service, like Jenkins
Mobile Hub: build and monitor the backend of your mobile app
Cognito: identity and data sync, e.g. save your gaming data in the cloud
Device Farm: test your app on real mobile devices
Mobile Analytics: analyse the usage
SNS: simple notification service, for push notification
Workspace : virtual desktop in the cloud
WorkDocs: like dropbox for enterprise
WorkMail: email and calendar service
Internet of Things: manage IoT devices
IAM: manage users and their access
what does IAM give you?
you can manage users,
you can have identity federation, which means you can connect IAM to active directory or facebook or ...
you can have multi factor authentication
you have PCI DSS compliance (Payment Card Industry Data Security Standard)
what are the main terms in IAM?
1- USERS: end users, i.e. people
2- GROUPS: users can be put under groups
3- ROLES: you create a role and assign it to AWS resource, for example a role to access dynamodb, it can be assigned to EC2
4- POLICIES: it is a list of permissions, it can be assigned to USERS, GROUPS and ROLES
what is the default region and how to change it?
when you login to AWS console, on the top you have a region that you are assigned to, it is better to change this region to something close to you
is identity access region specific or not?
Identity access management is not region specific, so when you do that it will apply to all regions
so when you create roles profiles ... they are applied to all regions
which URL should you send to users to log in to the AWS console, and how to make it friendly?
after accessing the IAM page from the console, at the top you will have a URL that you can send to your team to let them access; you can edit this link to make it friendly, however the name should be unique
what is a root account
the email address you used when you set up your account,
it has a full administration access
best practice, dont use the root account to access
how to activate MFA on root account
from the IAM page press Activate MFA on your root account
you will have a wizard, what you should know is
1- you have two types of MFA: software (e.g. Google Authenticator) and hardware (you should buy a device)
How to create a new IAM user
from the IAM page choose Create Individual IAM users
you will have a wizard what you should know is
1- during the wizard you will check the "Generate an access key for each user" option
this option will give the user an Access Key ID and Secret Access key (it is like username/password)
BE CAREFUL, YOU WILL SEE THIS INFORMATION ONLY ONCE, SO KEEP IT SAFE, OTHERWISE YOU WILL HAVE TO GENERATE IT AGAIN
What is the next step after creating a user?
after that the user will be created but he will not be able to access because he doesnt have a password
to assign a password go to IAM page, from the left side choose users, then from UserActions dropdown menu choose Manage Password
from there you can assign an auto-generated password to users and ask them to change the password when they log in
get the password and send it to the user.(you can use SLACK to send password safely)
why you need the Access key id & secret access key
you use them to access the AWS APIs
we create a user, he logged in, however he cannot do anything, why?
by default users have no permission when you create them, they can do nothing, so you have to give them permission
how to give users permissions
from the IAM page, left hand pane, go to Policies, from there you can attach a policy to users
what is a policy
a list of permissions written in JSON format
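for example, a minimal sketch of a policy granting read-only access to one S3 bucket (the bucket name my-bucket is a placeholder, but the overall shape is the standard IAM policy format):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

a policy is just a list of such Statement entries; anything not explicitly allowed is denied by default.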
any other better way to give permissions
yes, create a group, assign policy to a group (part of the wizard) then add user to a group.
This is more organised.
REMEMBER YOU CAN ASSIGN POLICY TO USERS, ROLES AND GROUPS
IAM Password Policy: set requirements such as minimum length and at least one number
ROLES: allow resources to access other resources
How to create a password policy
from IAM page choose "Apply IAM Password Policy"
what are ROLES
Allow resources in AWS to access other resources without user name & passwords or any keys
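under the hood, a role carries a trust policy saying which service may assume it; a sketch of one that lets EC2 instances assume the role (this is the standard trust-policy shape):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

the permissions themselves (e.g. DynamoDB access) come from the separate policy you attach to the role.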
how to create ROLE
also from the IAM left side, choose ROLE, in the wizard you will choose ROLE name, attach policy to role and you will choose ROLE TYPE
what are the ROLE TYPES
AWS SERVICE ROLE
ROLE FOR CROSS ACCOUNT ACCESS
ROLE FOR IDENTITY PROVIDER ACCESS
we will talk about them later
VERY IMPORTANT WHAT IS THE DIFFERENCE BETWEEN ADMIN USER AND POWER USER??
Admin user has full access to everything
Power user has access to everything except managing groups and users in IAM
S3
1- use it for uploading files
2- you store in Buckets, which are like directories
you are creating a bucket and you get an error that the name is used, why?
3- you will have a universal URL (universal name space) to access your S3
very important: the bucket name is a globally unique name
what is the object size in S3
from 1byte to 5 TB
do you have a limit on S3 storage?
No, it is unlimited
what is the Consistency Model in S3
read after write when you PUT a new object (which means you will read the object you put directly)
Eventual Consistency when you update (overwrite PUT) or delete
what are the main parts that the object has in S3
1- Key: the file name
2- Value: the file itself
3- Version Id: this is important when you want to do versioning
4- Metadata: data about data, like creation date
5- access control list: for managing access
what are the types of S3 storage
1- S3 (Standard): availability 99.99%, durability 99.999999999%; data is stored redundantly so it handles the loss of 2 availability zones at the same time
2- S3 Infrequently Accessed: similar availability and durability. it is used for data that is accessed less frequently, like every week or month; lower fee than S3, but you are charged on retrieval
3- Reduced Redundancy Storage: availability 99.99%, durability 99.99%, cheaper, handles the loss of 1 availability zone
4- Glacier : for archiving, you retrieve data in 3 to 5 hours
how are you charged in S3?
storage used, number of requests, and data transfer pricing
you want to store an operating system in the cloud, is S3 a correct choice??
no, S3 is object-based storage; to run an operating system you need block storage, i.e. EBS
how to create bucket
from AWS console go to S3 page, click create bucket
YOU SHOULD CHOOSE THE REGION OF YOUR BUCKET
how to enable logging, and where can you put the log
from AWS, go to S3 page, from there choose your bucket, you have Logging properties.
when you enable logging, is the log stored in the same bucket??
you can add the log to the bucket itself or (which is better) put it in another bucket.
what is event in S3, how to create it
you can trigger an event when something is happening in S3, like adding, deleting and so on.
the event can do something via SNS, SQS or run a Lambda function
also go to your bucket properties and check Events
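if you configure events through the API instead of the console, the bucket notification configuration is a JSON document; a sketch that fires a Lambda function whenever an object is created (the function ARN is a placeholder):

```json
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

the same document can carry TopicConfigurations (SNS) and QueueConfigurations (SQS) entries for the other two targets.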
what is Requester Pays option
it means that the one who requests something from S3 should pay not the bucket owner
how to upload a file to an S3 bucket, and why can't you view the document after you upload it?
you can simply upload using the upload button
after you upload the file is private, that is why you cannot access it
how to change the file permission
you can open the file and add permission in properties
do you change from S3 to Reduced Redundancy on the bucket level or file level?
you can do it on the file level, you don't have to change the whole bucket,
you can do that from the file properties, you can also change the encryption
how to enable versioning and how can you disable it
go to your bucket, from properties you have versioning, you can enable it
when you enable versioning you cannot turn it off, you can only suspend it
how to version delete and retrieve a versioned file
you can upload a file
when you delete it you will have "Delete Marker"
as you can see "delete marker" is on top, which means this is the current state of the document
to go back to previous version check the current version and delete it
what is cross region replication and how to do it
you can replicate a bucket across region, you have to enable versioning first
you can do it from bucket properties:
1- enable cross-region replication
2- choose the destination bucket
3- choose destination storage class (is it reduced redundancy or ...)
4- create/select an IAM Role, this is for giving permission, just a wizard
5- click save
after doing cross-region replication, there is nothing in the destination bucket, why?
when you do cross-region replication, the existing files will not be replicated, you should copy them yourself; only future files will be replicated.
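for reference, a replication configuration set through the API looks roughly like this (the role and bucket ARNs are placeholders; an empty Prefix means the whole bucket):

```json
{
  "Role": "arn:aws:iam::123456789012:role/replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
```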
How to manage s3 life cycle
you can manage the life cycle of your s3 bucket and object,
open the bucket, from properties you have life cycle, from there you can add a rule
you can add a rule to the whole bucket, or to a specific folder
what is the life cycle option that you have when versioning is not enabled
- the first option is when version is not enabled, here you have the following
1- Transition to standard Infrequent S3: which means after certain amount of days convert the object to Infrequent S3
2- Archive to Glacier: after certain amount of days go to glacier
3- permanently delete: after certain amount of days delete the object
what are the life cycle restrictions
1- transition to infrequent: at least after 30 days + the object must be at least 128KB
2- Archive to Glacier: if you set transition to infrequent -> archive to glacier should be at least 30 days after infrequent
for example infrequent is 37 -> archive glacier should be at least 67
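the 37/67 example above, expressed as a lifecycle configuration, looks roughly like this (rule ID and day counts are just the example values; an empty Prefix means the whole bucket):

```json
{
  "Rules": [
    {
      "ID": "archive-then-delete",
      "Prefix": "",
      "Status": "Enabled",
      "Transitions": [
        { "Days": 37, "StorageClass": "STANDARD_IA" },
        { "Days": 67, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```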
what are the life cycle option that you have when versioning is enabled
- actions on the current version:
1- transition to infrequent
2- archive to glacier
3- Expire: which means a delete marker will be set on the object ( it will not be deleted permanently)
- actions on the previous version:
1- transition to infrequent
2- archive to glacier
3- permanently delete
how can you delete permanently in case versioning is enabled
you should set expire and permanently delete
can you move an object to Reduced S3 storage as part of life cycle management?
NO, there is no such option
what about static web hosting on S3?
you can create static website using s3, you can check this from bucket properties
why versioning will cost you more?
because you will store multiple versions which means extra space.
what about s3 and MFA?
you can turn multi factor authentication on object delete, so by that it will ask you to enter a number to delete an object
AMAZON CLOUD FRONT CDN
what are the main things that you should know when you talk about CLOUD FRONT
1- Edge Location: the server that will cache the content
2- Origin: the original file location; could be S3, EC2, a load balancer, or Route 53. it could also be something not in AWS (i.e. your own server)
3- Distribution: a collection of edge location create a distribution, you have 2 types, web distribution for websites and RTMP for media streaming
4- you have TTL (time to live) for objects
5- you can delete cached objects
6- you can have multiple origin to a distribution,
is edge location read only
very important, you can write to edge location, and then the edge location will write back to origin
how to create a web distribution
from management console choose cloud front, press create distribution, then web distribution
you will have a form with many values:
Origin Domain Name: this is the origin of the file, choose one of your S3 buckets or EC2
Origin Path: you can put a folder in S3 bucket
Origin ID: as you can have multiple origins in the same distribution, the origin ID can help you to distinguish between them
Restrict Bucket Access: it means that you can now restrict access to the actual S3 bucket, which means you cannot access it directly, you can access it only through the distribution.
you have other options, which are not important
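when you choose Restrict Bucket Access, CloudFront uses an Origin Access Identity (OAI), and the bucket gets a policy that only lets that identity read objects; a sketch (the identity ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

with this in place, direct S3 URLs stop working and only the distribution can serve the files.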
now you create a web distribution but you cannot access the content of S3, WHY?
when you create a distribution, the form asks you if you want to give a read permission,
1- Path Pattern: the pattern to cache, for example only *.pdf
2- Viewer Protocol Policy: you can choose http, https or other options
3- Allowed HTTP methods: you can choose the http method,
NOTE: you should enable the PUT method in order to upload files
4- Cached HTTP methods: Get and Head are cached by default, you can choose Options method as well
5- Object Caching: you can use the origin cache header, or your own header
6- Minimum TTL: in seconds
7- Maximum TTL: in seconds
8- Default TTL: in seconds
9- forward cookies & forward querystring: should cloudfront pass cookies and querystring to origin
10- smooth streaming: if you want to use Microsoft Smooth Streaming
11- Restrict Viewer Access (Use Signed URLS or Signed Cookies): with this option you can restrict access to your training videos for example to only specific users.
12- compress object automatically: for compression
if you choose no, you should go and update the policy by yourself
then you have another option for distribution settings:
1- Price Class: you can use where you will have your edge locations, "Use Only US and Europe", " Use only US and Europe and Asia", "Use all Edge Locations"
2- AWS WAF Web ACL: this is to use firewall
3- Alternate Domain Names: you can provide a url here, this will be covered in Route 53
4- SSL Certificate: use SSL
5- logging: many options to turn logging on
how to create multiple origins for a distribution
aws console, cloud front page, open your distribution, choose origin from top, you can add origin here
create geographical restriction
you can restrict users from specific countries, same as the picture before, choose restrictions
How to invalidate object without waiting for TTL
from the picture above use Invalidations
S3 SECURITY AND ENCRYPTION
how security is handled in S3
1- by default all new buckets are private
2- you can set up access control using:
a- bucket policy: apply this to whole bucket
b- Access control list: this is on individual object
3- you can create access logs which log all requests made to the S3 bucket
how encryption is handled in S3
we have these types of encryption
1- In Transit: this is when you send file from your computer to S3 (when you upload the object), this is secured by SSL /TLS (it uses https)
2- At rest: you have server side encryption and client side encryption,
server side encryption can be done in 3 ways:
a- S3 Managed Keys, SSE-S3 : Amazon will handle all the keys for you, each object will be encrypted with a unique key, and they will then also encrypt the key with a master key and rotate the master key.
b- AWS Key Management Service, SSE-KMS: similar to SSE-S3 with extra stuff, you will pay more, you have something called envelope key which protect your key, you will have also Audit trail
c- Server Side Encryption With Customer Provided Keys, SSE-C: you manage the keys yourself
client side encryption means that you do the encryption on the client side then upload it
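on the wire, server-side encryption is requested with a header on the PUT request; roughly:

```
# SSE-S3: let S3 manage the keys
x-amz-server-side-encryption: AES256

# SSE-KMS: use a KMS-managed master key instead
x-amz-server-side-encryption: aws:kms
```

in-transit protection is separate: it comes from the HTTPS connection itself, regardless of which at-rest option you pick.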
AWS STORAGE GATEWAY
it is a way to connect your datacenter with AWS,
AWS Storage Gateway is a virtual appliance you install in your data center; it asynchronously replicates data to AWS (S3 or Glacier).
AWS Storage Gateway software is available as a VM image
what are the types of Gateways you have
1- Gateway Stored Volume: your data is stored in your data center, the data asynchronously replicated to S3
2- Gateway Cached Volume: your data is stored in S3, only the most frequently accessed data is cached in your data center (save cost, if you dont have internet there will be a problem)
3- Gateway Virtual Tape: if you have traditional tape infrastructure, you can virtualize that and make your data stored in AWS, you can use popular backup applications like netBackup
how Gateway is connected with AWS
internet or direct connect
what are the Import/Export types
we have 2 services
1- IMPORT/EXPORT Disk: you send your disk to amazon they will do the import and export for you,
important: you can import data to s3,ebs and glacier, export to S3
2- Import/export snowball: amazon will send you a device like a bag, you put the data and send it back to amazon, very secure and cost effective.
important: you can import/export data only to or from s3.
in general Amazon advises using Snowball over Disk.
S3 Transfer Acceleration
what is Transfer Acceleration?
a newer service; it uses CloudFront to accelerate the upload speed: rather than uploading directly to S3 you upload to an edge location (CloudFront), which is much faster.
in order to enable acceleration
go to your s3 bucket properties and enable the option
How many S3 buckets can you have by default per account?
100
you are uploading a 7.5 GB file but getting "your upload exceeds the max size", what should you do?
you should use multi part upload api for your objects
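the reason: a single PUT is capped at 5 GB, so a 7.5 GB object has to be split into parts. a quick sketch of the arithmetic, assuming a hypothetical 100 MB part size:

```shell
# hypothetical sizes: a 7.5 GB (7680 MB) object, 100 MB per part
SIZE_MB=7680
PART_MB=100
# ceiling division gives the number of multipart-upload parts needed
PARTS=$(( (SIZE_MB + PART_MB - 1) / PART_MB ))
echo "$PARTS"   # 77
```

the CLI's high-level commands (aws s3 cp) switch to multipart upload automatically for large files.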
what is the max file size that you can upload in one PUT request?
5 GB
what is the URL for a bucket called "xxxx" created in the EU West region?
https://s3-eu-west-1.amazonaws.com/xxxx
EC2
what are the price options?
On Demand: normal one, per hour
Reserved: reserve a machine for 1 or 3 years term
Spot: bidding; you set a bid price and keep the instance while the spot price is below your bid
what will happen if you terminate the spot instance or if amazon terminates it
if amazon terminates the instance, you will not be charged for partial hour of usage
if you terminate the instance yourself, you will be charged for the hour.
what are the EC2 types?
DIRT MCG, this is the acronym
D: Density storage
I: for IOPS
R: for RAM
T: cheap (T2 micro)
M: main choice for general purpose
C: (cluster instance) for compute and high network performance
G: Graphics / GPU
what is EBS?
EBS is simply a disk in the cloud; you can attach an EBS volume to only one EC2 instance, however an EC2 instance can have multiple EBS volumes
what are the types of EBS?
1- General Purpose SSD: 3 IOPS (input/output operations per second) per GB, up to 10000 IOPS
2- Provisioned IOPS SSD: more than 10000 IOPS, optimal for NoSQL
3- Magnetic: this is the normal disk not ssd
what is the availability of General Purpose SSD?
99.999%
LET'S GET OUR HANDS DIRTY
how to start a new EC2 instance
while starting an instance you should fill this form
you should set the number of instances
you should set the purchase option, which could be request spot instance.
Network,Subnet and Auto-assign IP: will talk about them later in VPC
IAM Role: you can assign an IAM role here
Shutdown Behaviour: is it stop or Terminate
Enable termination protection: with this, when you try to terminate, it will tell you to remove this option before you can terminate.
Monitoring: this is for cloud watch
Tenancy: is it shared or dedicated instance, will talk about it later
User data: here you put a bash script which will run when the instance starts
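a minimal sketch of such a user data script (the marker file and message here are made up for illustration; a real script would usually bootstrap software instead):

```shell
#!/bin/bash
# hypothetical user-data script: runs once, as root, at first boot
# a real one would usually install software, e.g.:
#   yum install -y httpd && service httpd start
echo "bootstrapped at first boot" > /tmp/userdata-marker.txt
```

because it runs as root, you don't need sudo inside the script.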
also in the next step you will set up an ebs for your instance
you set the size and the IOPS; you also have an option called Delete on Termination
as you can see the default EBS is the root volume; for this EBS the Encrypted option is always Not Encrypted, if you want an encrypted volume you should add a new one
the default EBS shown above is there for your operating system
then you can add tags to your instance
then you should set your security group
in the image above we created a new security group with ssh and http open
after that you need to create a key in order to be able to connect to your instance using Putty for example
as you can see you set the key name, and you download the key
now you can launch the instance and you will have this information
you need the public ip and Public dns to connect to the instance
you created an EC2, now you want to assign an IAM to this EC2, how can you do that
you cannot, you can assign IAM role only while you create the instance not after creation
how to use putty to connect
when you download the key, the key will be in .pem format; PuTTY doesn't understand this format, it understands the .ppk format
you should use the PuTTYgen software in order to convert .pem to .ppk
after you convert it, you can open Putty, use the Host Name ec2-user@TheInstancePublicIP
you should also import your ppk files you can do it here
how to create a security group
you can go to the EC2 instance page, from here you can create a security group
if you change a security group value, does it take effect immediately?
yes, changes take effect immediately
what are the default values for security groups
all inbound traffic is blocked
all outbound traffic is open
what does STATEFUL mean for security groups?
it means if you allow http for inbound, you dont have to allow it in outbound to get the response back.
VOLUMES AND SNAPSHOTS
what is the difference between a volume and snapshot
volume is what you create on EBS, so when you add an EBS you are adding a volume.
snapshot, is basically taking a snapshot of your volume and put it in S3.
what does it mean that snapshots are incremental?
it means that if you take a snapshot for a volume, the first one is gonna be a full snapshot, however the next one will be only the difference between the first one and the current state.
why the first snapshot takes time
because it is a full copy of the volume
how to create a volume
from the console go to the EC2 page; from here you have Volumes, open it; here you can create a new volume, delete, attach a volume to an instance, create a snapshot and so on..
what are the linux steps that you should do after you create a new volume and attach it to instance
1- when you attach a volume to an instance, AWS will tell you the path of the volume
2- now when you go to linux, run the command "lsblk" to check the attached volumes
you will see your newly created volume
3- check if the volume has a file system:
file -s /dev/sdf (or on some machines, /dev/xvdf)
if the return value is "data", this means that there is no file system
4- format the volume (which basically means create a file system)
mkfs -t ext4 /dev/xvdf
5- create a directory to mount to
mkdir /fileserver
6- then mount this volume to the directory
mount /dev/xvdf /fileserver
now you can go to /fileserver and start working
7- you can unmount the volume whenever you want (umount /dev/xvdf)
how to check your snapshots
also from EC2 page there is a link to snapshots
how to create a volume from a snapshot
go to snapshot page and do Action->Create Volume
I created a snapshot of a Magnetic volume, can I now create an SSD volume from the snapshot?
yes you can, and this is a nice thing about snapshots: they are a way to move your data from a magnetic to an SSD volume
talk about Snapshots and Encryption
if the volume is encrypted --> the Snapshot will be automatically encrypted
if the snapshot is encrypted --> the restored volume is encrypted
can you share encrypted snapshot
no, only unencrypted snapshot can be shared
talk about the root volume and how to create a snapshot of that volume
this is the volume that will be there when you create a new instance
to create a snapshot of that volume you should stop the instance
can the root volume be encrypted by default
yes, this is a new thing now in amazon. you needed a 3rd party tool before
WINDOWS EC2 & RAID GROUP
what is RAID, what are the known types
RAID: Redundant Array of Inexpensive Disks.
RAID 0: uses 2 hard disks; when you save something, half of it goes to disk 1 and the other half to disk 2, doubling read/write speed. losing one disk means losing everything.
RAID 1: is mirroring, the performance is the performance of one hard disk, RAID 1 doesn't mean slower performance. losing one drive is fine.
RAID 10: mix RAID 1 and RAID 0, you need 4 hard disks.
RAID 5: this will use an algorithm to rebuild the data on lose, you need at least 3 disks, one of them will be reserved for storing information to rebuild the data
RAID 6: a more reliable version than RAID 5, you can survive the loss of 2 disks.
is it recommended to use RAID 5 on AWS?
NO, never use RAID 5 on AWS, it is not recommended
when to use RAID on AWS?
when the I/O performance is not good, sometimes you are using your own database installation (e.g. cassandra) and the I/O performance is not good, then you can use RAID
how to access a Windows instance in the cloud using Remote Desktop
1- you need to enable RDP protocol for your security group
2- when you access RDP, you need username and password, the username is "administrator" however you should create a password.
to do that go to your instance and choose Get Windows Password
you need your private key or the .pem file to get a password
3- now use the public ip, administrator as username and the generated password.
how to create RAID array in windows instance
after you add EBS volumes to the instance, connect to your Windows machine using Remote Desktop, go to Disk Management, from there you can create a RAID array
how to take a snapshot of RAID array safely
there is a problem when you take a snapshot of a RAID array: while the application and OS are working they use caches; when you take a snapshot of a RAID array you should ensure this cache is flushed, otherwise the snapshot will not be consistent
to do that you can
1- unmount your raid array then take a snapshot
2- or shutdown your EC2 and take a snapshot
CREATE an AMI
what is AMI?
AMI is an Amazon Machine Image; to start an instance you need an AMI,
how to create an AMI?
you create an AMI from your root EBS volume, this is the volume where you store your operating system, applications and so on
1- go to aws console, EC2 page, then volumes
2- from here create a snapshot of the root volumes
3- go to the snapshots page
4- from here select your snapshot and choose "Create Image from EBS snapshot"
5- after that you will find your image ready in AMIs section.
where is your AMI stored?
it is stored in amazon S3
how to make an AMI accessible by the whole world, and what should you do to be safe
you should make the AMI public, you can do that from the AMIs page
you should delete all the important stuff, like keys and bash history, before creating an AMI and making it public
can you share an AMI with people without making it public
you can keep the AMI private and share it with only a few users, you can do that from the AMIs page.
how to make an AMI public, private, or share it with people
AWS console, ec2 page, AMIs section.
here go to action-> Modify Image Permissions
you created an AMI in Ireland, you changed the region to Australia, you want to use the AMI but you cannot find it, what happened?
AMIs are regional; if you create it in Ireland it will only appear in Ireland. in order to see it in Australia you should copy it to that region; you can do that from the console or using the EC2 API
AMI TYPES EBS VS INSTANCE STORE
based on what can you choose your AMI
when you choose AMI you can choose them based on Region, Operating System, Architecture (32 or 64 bit), Launch permission, and root storage type
what are the root storage type AMI?
1- EBS: the root storage will be stored on EBS
2- Instance Store: or Ephemeral storage
you created an Instance Store AMI, can you attach an Instance Store volume to that instance, i.e. after creation?
no, you can only add Instance Store volumes while creating the instance, not after creation
how many Instance Store volumes can you attach to an Instance Store AMI?
what is the difference between EBS AMI and Instance Store AMI
1- with instance store you cannot stop the instance, you can only terminate or restart, with EBS AMI you can stop and terminate
2- if you go to the Volumes section, you will find your EBS volume but not the Instance Store, which means you can attach and detach EBS volumes but not Instance Store volumes
do you lose your data on EBS or Instance Store when you reboot?
no, you don't
what will happen to EBS root Volume and Instance Store volume when you terminate
root volume will be deleted on terminate.
however with EBS you have an option to tell AWS to keep the volume
LOAD BALANCER AND HEALTH CHECK
how to create a load balancer
from AWS console, go to EC2, there is a section for load balancer
while creating a load balancer you should choose if you want an internal or external load balancer
you can also choose the port that the load balancer is listening on and the port it is going to forward to.
you should assign a security group
you can also add https certificate to make it secure
then you should configure the healthcheck
then you should add the EC2 instances under the load balancer
what is the difference between internal and external load balancer
an external load balancer is internet-facing,
an internal load balancer is used internally, it is not internet-facing
how to configure the health check for load balancer
you should set the ping path, the port, the response time and so on
what is the "enable cross zone load balancing"
this is an option you will see when you create a load balancer in the add ec2 instances section
it means that the balancer will divide the load evenly between instances in all availability zones
what is the "enable connection draining"
this is an option you will see when you create a load balancer in the add ec2 instances section
the option allows the load balancer to stop sending new requests to de-registered or unhealthy instances, while keeping existing connections open,
when you set connection draining to 300 seconds, the load balancer will keep the current connections open for up to 300 seconds with the de-registered instances.
now you have a load balancer what ip should you use to call it
a load balancer doesn't have a public IP address like EC2 instances, it only has a DNS name, so you will use the DNS name
under a load balancer, what are the possible statuses of the EC2 instances?
ec2 instance is either "In Service" or "Out of Service"
CLOUD WATCH EC2
when you create a new instance, there is an option "enable cloud watch detailed monitoring", what does it mean?
each EC2 instance comes with basic monitoring, which polls data from EC2 every 5 minutes
with detailed monitoring, data is polled every 1 minute.
extra charge may apply
how to use cloud watch monitoring
from aws console, choose cloud watch
on this page you create a dashboard, and in the dashboard you add widgets; the widgets give you information about the status of your system (e.g. your EC2, EBS ...)
what are the types of widgets we have
metric (graph) and text
what are the available metrics we have for EC2 instances
you have metrics about CPU, DISK, NETWORK and Status. there is no memory information
what is a CloudWatch alarm
you can create an alarm so that when something happens, like CPU over 80% for 5 hours, it sends you an email or starts a new EC2 instance
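The core alarm idea, a metric must breach a threshold for a number of consecutive periods before the alarm fires, can be sketched in a few lines of Python. This is a toy model, not the CloudWatch API; the function name and sample numbers are made up:

```python
def alarm_fires(datapoints, threshold, evaluation_periods):
    """Return True if the last `evaluation_periods` datapoints all exceed the threshold."""
    window = datapoints[-evaluation_periods:]
    return len(window) == evaluation_periods and all(d > threshold for d in window)

cpu = [40, 85, 90, 95]          # CPU % sampled once per period
print(alarm_fires(cpu, 80, 3))  # True: the last 3 datapoints are all above 80%
print(alarm_fires(cpu, 80, 4))  # False: the first datapoint is below the threshold
```

The real service adds statistics (average, max, ...), missing-data handling and alarm states (OK, ALARM, INSUFFICIENT_DATA), but the consecutive-breach rule is the heart of it.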
what is a CloudWatch event
you can also create an event that fires when the state of your resources changes; the event may call a Lambda function to register a new instance in DNS, for example,
or create a snapshot, or ....
what is cloud watch log
CloudWatch has an agent which you can install on your system in order to send your logs to CloudWatch; after that you can monitor and check the logs from CloudWatch
THE AWS COMMAND LINE AND EC2
what is the AWS command line
you can use the command line to do various things with your AWS installation, like checking S3 and other services
how to use the command line
use the Amazon Linux AMI to create an instance; the command line comes preinstalled with this AMI. if you are using Red Hat it might not be installed
how to setup the command line to make it work
1- connect to your ec2 instance using putty
2- type "aws configure"
it will ask you for your AWS Access Key and AWS Secret Key (the keys that you get when you create a new user)
also it will ask you about your default region name
how to list all your S3 buckets using the command line
"aws s3 ls"
how to check for help
"aws s3 help"
where your credentials (the access key and secret access key you added before) are stored, why this is not secure?
you can "cd ~" then "cd .aws"
here you will find a credentials file
this is insecure, because if anyone gets access to this machine they can take these keys and start using them; that is why you must not save these credentials on your EC2 instance. you can use roles instead, as we will check later
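To see why this matters, here is a sketch of what the plain-text credentials file looks like and how trivially it can be read with Python's standard library. The key values below are fake, for illustration only:

```python
import configparser

# ~/.aws/credentials is a plain INI-style text file; these keys are made up.
sample = """
[default]
aws_access_key_id = AKIAFAKEEXAMPLEKEY
aws_secret_access_key = fakeSecretKey/ExampleOnly
"""

config = configparser.ConfigParser()
config.read_string(sample)
# anyone with shell access to the instance can do exactly this:
print(config["default"]["aws_access_key_id"])
print(config["default"]["aws_secret_access_key"])
```

Nothing is encrypted; whoever can read the file owns the keys. That is the whole argument for IAM roles.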
USING IAM ROLES WITH EC2
how can you assign a role to your EC2
while creating a new instance, there is a form to assign an IAM role to the instance, by doing that you dont have to store your access and secret keys in the EC2 instance
you created an IAM role in Sydney region, now you are creating an EC2 instance in Tokyo, can you assign the Sydney role to the Tokyo instance
yes, IAM roles are global you can use them in any region
USING BOOTSTRAP SCRIPTS
how to install apache and copy a file from S3 to the server when EC2 instance starts
you can write a bash script.
while creating your instance, amazon will ask you if you want to write any bootstrap script, this script will be executed as root user.
the script we are looking for is
#!/bin/bash
yum install httpd -y
yum update -y
aws s3 cp s3://YOURBUCKETNAMEHERE/index.html /var/www/html/
service httpd start
chkconfig httpd on
and we add this script here
EC2 INSTANCE METADATA
what is instance metadata
this is the instance's information, like public IP, DNS, IAM role and so on
how can you check this data
1- use putty to access your instance
2- you can use the url http://169.254.169.254/latest/meta-data to access the metadata with curl:
curl http://169.254.169.254/latest/meta-data
3- when you run this command, you will get a list of metadata entries; choose whatever you like and then run a new curl:
curl http://169.254.169.254/latest/meta-data/public-ipv4
AUTOSCALING GROUP LAB
what should you do before creating an auto scaling group
you should create launch configuration, which is like creating an EC2 instance.
you can do that from AWS console, EC2 page, Launch configuration section.
there is an option called IP Address Type,
which controls how a public IP address is assigned to the instances created as part of the auto scaling group.
how to create an autoscale group
1- create a launch configuration
2- have your load balancer ready if you want to use one
3- now you can start creating the autoscaling group; the first form is this
as you see, you set the launch configuration that you created before
you set the group size; 3 means that the group will start with 3 instances
Subnet is very important: basically here we are telling Amazon to create the instances in different availability zones, not in the same availability zone, so if one availability zone goes down you are not out
then you choose your load balancer and the health check; this health check detects when a machine is down and fires up a new instance to replace it.
the health check type should be ELB if you have ELB.
4- then the second form is about configure scaling policies
which means when should we add a new instance and when should we remove
also you can keep the group always with the same size which is 3 in this case.
5- you can then set notifications, like send an email to someone when a new instance is started
6- now you press the create group button and Amazon will create 3 instances for you
what will happen now if an instance is down in the group
the health check will detect that and fire a new instance
EC2 PLACEMENT GROUP
what is an EC2 placement group?
1- a group of EC2 instances that are connected with a low-latency 10 Gbps network
so basically you use placement group if you want to do fast processing
can your placement group span multiple AZs
no, your instances must be in the same zone (single point of failure)
can you have 2 placement group with the same name
no, placement group name should be unique in your AWS account
can you add a micro instance to the placement group?
no, only powerful types are available to be added to the placement group (Compute Optimized, GPU, Memory Optimized, Storage Optimized)
can you add EC2 of different types (i.e mix them) in placement group
yes, however it is not recommended; it is better to make your instances the same type
can you merge 2 placement groups together
no, placement groups cannot be merged
can you move an existing instance to placement group
NO; you can create an AMI from your instance and then launch a new instance from this AMI inside the placement group
ELASTIC FILE SYSTEM (EFS) LAB
what is EFS
it is a new service from Amazon, still in preview.
with EFS you dont have to provision your storage, as you remember when we were assigning EBS storage to our instance, we were setting the size, in EFS you dont have to do that.
you can assign EFS to multiple instance, not like EBS, which means you can put your code in EFS, assign it to multiple instance and have the code in one place
EFS supports the read-after-write consistency model like S3; however, EFS is block storage, not object storage
you can check the lab if you want to know more, EFS is not available to everyone now
what is lambda
it is event-driven code: when an event happens, the code runs,
event types include S3 object modifications, DynamoDB changes and Kinesis streams.
what are the supported programming languages for Lambda
Node.js, Java, Python and C# (the list keeps growing; check the AWS docs)
how will you be charged
the first million requests per month are free
$0.20 per million requests after that
plus the execution time, rounded up to the nearest 100ms; the price depends on the memory you allocate: $0.00001667 for every GB-second.
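The pricing above can be turned into a rough bill estimate. This sketch uses the figures quoted in these notes (prices change; check AWS), and for simplicity it ignores the free compute tier (400,000 GB-seconds per month), so it slightly overestimates small workloads:

```python
import math

def lambda_monthly_cost(requests, avg_ms, memory_gb):
    """Rough Lambda bill from the figures quoted above; real prices change over time."""
    # first million requests free, then $0.20 per million
    request_cost = max(0, requests - 1_000_000) / 1_000_000 * 0.20
    # duration is rounded up to the nearest 100 ms per invocation
    rounded_seconds = math.ceil(avg_ms / 100) * 100 / 1000
    # $0.00001667 per GB-second of compute
    compute_cost = requests * rounded_seconds * memory_gb * 0.00001667
    return request_cost + compute_cost

# 3M invocations/month, 120 ms average, 512 MB allocated:
print(round(lambda_monthly_cost(3_000_000, 120, 0.5), 2))  # about 5.4 dollars
```

Note how the 100 ms rounding matters: a 120 ms function is billed as 200 ms per invocation.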
some stuff you need to know about dns
what are the IP version types, and what are A and AAAA records
IPv4: 32-bit addresses, stored in what is called an A record
IPv6: 128-bit addresses, stored in what is called an AAAA record
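The 32-bit vs 128-bit difference is easy to check with Python's ipaddress module (the addresses below are reserved documentation examples, not real hosts):

```python
import ipaddress

a_record = ipaddress.ip_address("203.0.113.7")     # the kind of value an A record stores
aaaa_record = ipaddress.ip_address("2001:db8::1")  # the kind of value an AAAA record stores

print(a_record.version, a_record.max_prefixlen)        # 4 32
print(aaaa_record.version, aaaa_record.max_prefixlen)  # 6 128
```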
what is CNAME
sometimes you have google.com, googl.com and gogle.com all pointing to the same place; this is done with CNAME records, which map one domain name to another domain name
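A toy resolver makes the CNAME idea concrete: a lookup keeps following CNAME records until it reaches an A record with an actual IPv4 address. The record table below is made up for illustration:

```python
# made-up DNS records: name -> (record type, value)
records = {
    "shop.example.com": ("CNAME", "www.example.com"),
    "www.example.com": ("A", "203.0.113.10"),
}

def resolve(name, table):
    """Follow CNAME records until an A record (an IPv4 address) is reached."""
    rtype, value = table[name]
    while rtype == "CNAME":
        rtype, value = table[value]
    return value

print(resolve("shop.example.com", records))  # 203.0.113.10
```

Real resolvers also cap the chain length to avoid CNAME loops; this sketch assumes a well-formed table.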
what is the top level domain
for example google.com.au, the top level domain is au then the second level is com
what is SOA record
A Start of Authority (SOA) resource record indicates which domain name server (DNS) is the best source of information for the specified domain. Every domain must have an SOA record.
SOA contains information like the primary name server, the administrator's email address, the domain's serial number, and refresh/retry timers
what is a name server
let's say you bought a domain name from godaddy.com and you bought hosting from blue.com; when you type www.bigasus.com the request goes to GoDaddy, which should send the request to where the web application is hosted (i.e. blue.com), however GoDaddy doesn't know anything about blue.com on its own.
this is where name servers come in; name server records are used to link GoDaddy with blue.com
what is TTL, and how do you handle a DNS migration
time to live, which is the caching time, whether on your computer or on a caching server
when you do a DNS migration you should reduce the TTL before migrating; if you don't, some DNS caches will keep serving the old records
what is Amazon Alias Record
this is Amazon-specific; it is similar to a CNAME in that it lets you create aliases for your resources' DNS names (remember that S3, load balancers and EC2 all have a DNS name)
what is a naked domain
this is your domain without www, also called zone apex sometimes
can you assign a naked domain to a CNAME
no; in Amazon a naked domain must be assigned to an A record (i.e. an IPv4 address),
we know that an ELB doesn't have an A record (i.e. an IPv4 address), so how do you assign a naked domain to an ELB?
you should use aliases
what is the difference between cname and alias record regarding cost
when you make a CNAME request to Route 53 you will be charged; alias record requests are free
ROUTE 53 LAB
how to create a Route 53 hosted zone
from the AWS console, Route 53, Create Hosted Zone
now let's say that you bought the domain name xxx.com from godaddy.com; when you create the hosted zone, the wizard will ask you for your domain name, so you enter xxx.com
it will also ask you for the type: Public Hosted Zone or Private Hosted Zone for VPC, which basically means: do you want this zone for your private network internally, or is it something public for the internet?
after this, Amazon will give you a list of name servers and an SOA record
you should copy these name servers and put them in GoDaddy
then you should create a record set, in this record set you put your naked domain, and you point this domain to your load balancer
as we mentioned before, you don't have an IP address for your load balancer, so you should create an alias record for that
and now you are ready to go
what are the routing policies
when you create a record set there is an option for routing policies; these routing policies are
Simple: you make a request (e.g. cloud.guru); it goes to Route 53, and Route 53 passes it to the EC2 instance or load balancer.
very simple, nothing smart; this is the default option. you use it when your application is simply one server or one load balancer
Weighted: split request between ELBs or zones
you can use it for example when you have a new service, start routing users gradually
Latency: Route 53 will route you to the region with the lowest latency
Failover: you have an active and a passive server; Route 53 keeps sending traffic to the active one, and if the active is down it redirects you to the passive
Geolocation: you will be routed based on your location
do you have a limit to the number of domains that you can manage in Route53
yes, 50, but you can contact amazon to increase the number
does Route 53 support MX records for SMTP servers
yes, MX records are supported
DATABASES ON AWS
what is DMS service
it allows you to migrate your database to the cloud automatically, and it can convert between engines, for example from Oracle to MySQL; it can even convert your stored procedures, etc.
LAB RDS INSTANCE
what are the important things that you should fill in while creating an RDS instance
1- DB instance class: the type of the machine where the database will run
2- Multi AZ Deployment: yes or no
3- you should give a db name and master username and password
4- do you want it in a VPC or not
5- do you want it to be publicly accessible
6- do you want it to be encrypted (some engines don't support that)
7- how often should amazon create a backup
8- after the creation you will have an endpoint address, you can use it for connecting
9- you should update your security group in order to be able to connect
RDS: back ups, Multi AZ and Read Replicas
what are the types of back ups for rds?
we have automated back up and database snapshots
Automated backup: you can return to a specific point in time within the retention period (e.g. last week at 5:13pm); automated backups run automatically and are enabled by default
Snapshot: this is done manually, you can take a snapshot by yourself at anytime
how long is the retention period
this is the period that you define, during which you can do point-in-time recovery; it is between 1 and 35 days
how does automated backup work?
Amazon takes one full backup every day and keeps the transaction logs; when you do a point-in-time recovery, it loads the backup and replays the transaction logs
you can do point in time recovery down to a second.
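The backup-plus-log-replay idea can be sketched in a few lines. This is a toy model of the mechanism, not anything RDS exposes; the data and function name are invented:

```python
def restore_to_point_in_time(full_backup, transaction_log, target_time):
    """Start from the daily full backup, then replay log entries
    (timestamp, key, value) up to and including target_time."""
    state = dict(full_backup)
    for timestamp, key, value in transaction_log:
        if timestamp > target_time:
            break
        state[key] = value
    return state

backup = {"balance": 100}
log = [(1, "balance", 120), (5, "balance", 90), (9, "balance", 200)]
print(restore_to_point_in_time(backup, log, target_time=5))  # {'balance': 90}
```

Stopping the replay at an arbitrary timestamp is exactly why recovery can be as fine-grained as a second.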
where is the automated backup stored?
it is stored in S3, and you get free backup storage equal to the size of your database instance
will performance be affected while Amazon is taking the automated backup
yes, so schedule the backup for when traffic is not high
what will happen to your backup when you delete the rds instance?
in case of automatic back up the backups will be deleted, in case of snapshot, they will stay there
what happens when you restore a backup?
Amazon launches a new RDS instance and restores into it; it does not happen on the same instance
what RDS engine types does Amazon support now
SQL Server, Oracle, MySQL, PostgreSQL, MariaDB and Aurora
which database support encryption
MySQL, SQL Server, Oracle, PostgreSQL and MariaDB.
encryption is not supported for Aurora
can you encrypt the database after creating the rds instance
no you should encrypt before creation
what will be encrypted?
the rds, the backup, the snapshot, the read replica
how to create a snapshot
aws console, rds, instances
then from Instance action drop down menu choose Take db snapshot
where can you find your snapshots and the automated backups
aws console, rds, snapshots
how to restore snapshot
aws console, rds, snapshots and then restore snapshot
how to do a point in time?
aws console, rds, instances
"Instance Action" -> Restore to Point in Time
how to migrate a snapshot to a different database type
go to the snapshots page and choose migrate database
how to move a snapshot to a different region
you can do that by copying the snapshot to a different region from the snapshot page
how can you move your rds to better performance instance
you can create a snapshot and then restore the snapshot, while restoring choose a better instance
what is Multi AZ
synchronous replica which is used for failover, it is not for improving performance
which databases support Multi-AZ
SQL Server, Oracle, MySQL, PostgreSQL, MariaDB
what is a read replica and which databases support it and give some scenario where you can use read replicas
you can create a read replica of your database, and even you can create a read replica from a read replica, you can create up to 5 read replicas from a db instance
read replication happens asynchronously
only MySQL, PostgreSQL and MariaDB support it.
you can create a read replica from your production environment and put it in dev
you can create a read replica to run some reporting
can you have Multi-AZ on a read replica
can you have a read replica of your Multi-AZ database in another region
yes, but only MySQL and MariaDB support that
how to create a read replica
AWS, RDS, Instances
"Instance Action" -> "Create Read Replica"
DYNAMODB
what is the difference between RDS and DynamoDB regarding scaling
scaling up in RDS means you have to create a snapshot and then restore it on a new instance with better performance
in DynamoDB this happens automatically
in RDS you can only scale out reads (with read replicas); you cannot scale out writes
where is DynamoDB data stored?
1- it is stored on SSDs
2- it is replicated across 3 geographically separate facilities
what are the consistency models
eventually consistent reads (the default): data is usually propagated to all DynamoDB copies within a second, so a read right after a write may return stale data
strongly consistent reads: the read returns the most up-to-date data; you will not have a situation where you read one thing from DynamoDB copy 1 and something else from copy 2
how do you scale your DynamoDB?
basically you provision the number of reads and writes per second
how does pricing work in DynamoDB?
write throughput: $0.0065 per hour for every 10 units (10 writes per second)
read throughput: $0.0065 per hour for every 50 units
storage cost: $0.25 per GB per month
DynamoDB is expensive for writes and much cheaper for reads
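A quick sketch of the monthly bill using the classic provisioned-throughput prices ($0.0065/hr per 10 write units and per 50 read units; reads being priced per 50 units is why they are cheaper). Prices change, so treat this as an estimate, not current AWS pricing; the workload numbers are invented:

```python
import math

def dynamodb_monthly_cost(write_units, read_units, storage_gb, hours=730):
    """Estimated monthly DynamoDB cost under the legacy provisioned prices."""
    write_cost = math.ceil(write_units / 10) * 0.0065 * hours   # per 10 write units
    read_cost = math.ceil(read_units / 50) * 0.0065 * hours     # per 50 read units
    storage_cost = storage_gb * 0.25                            # per GB-month
    return write_cost + read_cost + storage_cost

# 20 writes/s, 100 reads/s, 8 GB stored:
print(round(dynamodb_monthly_cost(20, 100, 8), 2))  # 20.98
```

Note that 20 writes/s costs the same as 100 reads/s here, which illustrates the "expensive writes, cheap reads" point.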
how to create a DynamoDB table
AWS console, DynamoDB, Create Table
what is reserved capacity in DynamoDB and how can you buy it
similar to EC2, you can buy reserved capacity in DynamoDB to save money
from AWS console, DynamoDB, then Reserved Capacity
how to scale your DynamoDB
AWS console, DynamoDB, Tables, Capacity
here you can change the read and write throughput
what is amazon redshift?
1- it is a data warehousing solution
2- it is petabyte-scale
3- you can start with a single node and then add more; when you have multiple nodes you get a leader node, which receives client connections and queries, and compute nodes; you can add up to 128 compute nodes
4- Redshift runs in one availability zone
what is the structure of redshift?
it is a column-based system; a column's data is stored sequentially on disk, which makes the I/O much faster.
compression is also much more effective, because it is done at the column level, which means you compress data of the same type (unlike row compression, where a single row might mix ID (integer), Name (string), Date (date))
also, when you have multiple nodes, Redshift can do Massively Parallel Processing (MPP), which returns query results faster
there is no need for indexes or materialized views in Redshift
how is the pricing in Redshift
you pay for your compute nodes; you don't pay for the leader node
how does security work in Redshift?
SSL encryption in transit
data encrypted at rest using AES-256
redshift takes care of key management
what is ElastiCache, and what are the engine types
- in-memory caching
- the caching engines are Memcached and Redis
Memcached is an object cache; Redis is a key-value store
Redis can have a master-slave architecture and can be deployed across multiple AZs
it gives around five times better performance than hitting MySQL directly, and is cheaper
what are the Aurora features
you start with 10 GB of storage and it scales up to 64 TB (in 10 GB increments each time), AND THE SCALING IS AUTOMATIC, UNLIKE OTHER RDS ENGINES. please notice that this is storage auto scaling
compute scaling (moving to a better CPU or more memory) is not automatic; you have to do it in a maintenance window because the database will stop
Aurora stores 2 copies of your data in each availability zone, and it uses 3 availability zones, which means you have 6 copies
Aurora can handle the loss of up to 2 copies of data without affecting write availability, and up to 3 copies without affecting read availability
Aurora storage is self-healing: if there is a disk error it handles it by itself
what are the types of replicas in Aurora?
1- Aurora replicas: up to 15; if there is a failover, Aurora can fail over to one of these replicas
2- MySQL read replicas: the kind we already know; you can have up to 5, and there is no automatic failover to these replicas
what is the max provisioned storage size you can get for MySQL and Oracle?
when you replicate the data from your primary to your secondary RDS instance, how much will it cost?
when you want to add a rule to an RDS security group, do you have to provide a port number or protocol?
when you have a Multi-AZ deployment, can you use the secondary DB as an independent read node?
no, the standby cannot be used for reads; use a read replica for that
what happens to I/O operations when you take a DB snapshot
on a single-AZ RDS instance, I/O operations are suspended until the snapshot is taken
what is a VPC
it is a virtual data centre in the cloud, when you create an AWS account, amazon will create a default vpc for you in every region so you can work with it in an easy way
basically Virtual Private Cloud is provisioning of a network, in this network you can assign the IP range that you like, creating subnets, configuring route table and network gateways.
so you can make some IPs public and other IPs private
what is a Hardware Virtual Private Network
it is a way to connect your private data center with the VPC in the cloud and build a hybrid cloud
what is VPC peering
you can connect multiple VPCs together; the connection goes over the AWS network, not the internet. the connected VPCs can be in different regions or even in different accounts
what does it mean that VPC peering is not Transitive
it means that VPC B and VPC C cannot talk to each other through VPC A; if they want to talk, they must do so via a direct peering connection
BUILD YOUR OWN CUSTOM VPC
how to create a VPC
AWS console, VPC, from here you can click Start VPC wizard.
from this wizard you can create a VPC, however we will try to create it manually
to do that go to AWS console, vpc, Your VPCs and click Create VPC
Name tag: just a name for your VPC
CIDR Block: the IP address range that your VPC will use; choose 10.0.0.0/16
Tenancy: you have 2 values, default and dedicated
what does 10.0.0.0/16 mean
this is CIDR notation; the /16 is the subnet mask, and it means the first 16 bits are the network part
which means your subnet mask is 255.255.0.0, giving you 65,536 addresses
if you choose /8 instead, the mask is 255.0.0.0 and you get even more addresses
what does tenancy with value dedicated mean
it means that your VPC runs on dedicated hardware: when you start a new EC2 instance it will be on a dedicated server, even if you select shared tenancy for the instance
what is created automatically when you create a VPC
a new route table is created when you create a VPC
what should you do after the VPC creation?
you should create a new subnet
VPC page, Subnets, create subnet
you should choose your VPC, availability zone and the CIDR
VERY IMPORTANT: EACH SUBNET CAN BE ASSIGNED TO ONE AND ONLY ONE AVAILABILITY ZONE.
when you say 10.0.0.0/24, how many IPs are available to you
it means the subnet mask is 255.255.255.0 and there are 256 addresses, from .0 to .255,
however the first (.0, the network address) and the last (.255, the broadcast address) are reserved --> you have 254
you created a subnet above and used 10.0.1.0/24; Amazon told you the available host IPs are 251, why?
in addition to the network and broadcast addresses, Amazon reserves 3 more addresses for its own use, so 256 - 5 = 251
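The subnet math above can be checked with Python's ipaddress module; the "minus 5" at the end reflects AWS's per-subnet reservations:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.1.0/24")

print(vpc.netmask, vpc.num_addresses)        # 255.255.0.0 65536
print(subnet.netmask, subnet.num_addresses)  # 255.255.255.0 256
# AWS keeps 5 addresses per subnet (network, router, DNS, one reserved, broadcast):
print(subnet.num_addresses - 5)              # 251 usable host IPs
```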
you created 3 subnets in the VPC above, lets say that you add an instance in the subnet 1 and EC2 instance in subnet 2, can these instance communicate
yes they can communicate without changing anything
after you create your subnet, what should you do to be able to connect to the internet?
you should create an internet gateway from VPC page
after you create an internet gateway you will notice that it is detached; you should attach it to your VPC
can you attach multiple internet gateway to a VPC
no, only one for each vpc
after you create an internet gateway, what should you do to let the internet gateway communicates with your EC2 instances
you should create a new route table and add another route.
the route should open all connections to the gateway:
0.0.0.0/0 opens all connections, and the target is the internet gateway
then you associate the route table with a subnet; we created 3 subnets before, and we will associate only one of them.
think about it like having an application tier and a database tier, each on a separate subnet; only one subnet will reach the internet
you can do this from the Subnet Associations tab
now the associated subnet will have access to the internet, which means all the EC2 instances in this subnet will have internet access
when you create an EC2 instance now, you can choose your VPC and your Subnet
SIMPLE QUEUE SERVICE (SQS)
what is the max message size?
1- the max message size is 256 KB
is there a first in first out gurantee?
2- there is no gurante "First in first out"
is sqs a push or pull system?
3- it is a pull system, you should pull the messages from the queue (not like SNS)
what about autoscaling
4- autoscaling is effective here: if you have many messages you can scale out your EC2 instances, or kill instances if there are no messages
what is visibility timeout
when you read a message from the queue, Amazon will not delete it from the queue until you delete it, because there is a chance that your consumer goes down or hits an error.
so the pattern is: read, process, then delete.
once you read a message, it enters the VISIBILITY TIMEOUT period, which means no one else can read this message; if you don't delete the message within the visibility timeout, Amazon will let another consumer read the message
what is the max visibility timeout
5- 12 hours time out for message
when does the visibility timeout period start?
it starts once a consumer reads the message, not when the message is stored in the queue
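The visibility timeout semantics can be modelled with a toy in-memory queue. This is not the SQS API, just a sketch of the rule: a received message becomes invisible, and if it is not deleted before the timeout expires it becomes visible to consumers again:

```python
class ToyQueue:
    """A minimal model of SQS visibility timeout; `now` is an explicit clock."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # msg_id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0)

    def receive(self, now):
        for msg_id, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                # the timeout clock starts when a consumer reads the message
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = ToyQueue(visibility_timeout=30)
q.send("m1", "hello")
print(q.receive(now=0))   # ('m1', 'hello') -- message is now invisible
print(q.receive(now=10))  # None: still inside the visibility timeout
print(q.receive(now=31))  # ('m1', 'hello') -- it was never deleted, so it reappears
```

Calling delete after processing is what prevents the message from being delivered again, which is also why SQS is "at least once" delivery.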
what is the max retention period
6- the retention period is up to 14 days
what is the delivery scheme?
7- at least once delivery
how is the billing?
8- billing is based on 64 KB chunks (so a 256 KB message is billed as 4 requests)
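The 64 KB chunking rule is just a ceiling division; a one-liner makes it concrete (function name is illustrative):

```python
import math

def sqs_billed_requests(message_kb):
    """Each 64 KB chunk of a message counts as one billed request."""
    return math.ceil(message_kb / 64)

print(sqs_billed_requests(256))  # 4: a maximum-size 256 KB message bills as 4 requests
print(sqs_billed_requests(10))   # 1: anything up to 64 KB is a single request
```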
SIMPLE WORKFLOW SERVICE (SWF)
what is the retention period for swf
2- the retention period is 1 year, not 14 days like SQS
what is the difference between SQS and SWF
3- SWF is task-oriented; SQS is message-oriented
4- an SWF task is assigned once and only once, unlike SQS's at-least-once delivery
what are the swf actors
Starter: initiate the workflow
Decider: decide the next step
Workers: do the tasks
what is a domain
a collection of related workflows
SIMPLE NOTIFICATION SERVICE (SNS)
is it push or pull
2- here we have push, not like SQS pull
where can you push to?
3- you can push to mobile devices, send emails, and call APIs
4- it is basically organized around topics
what are the supported push
http, https, email, email+json, sqs, lambda and mobile
what happens when you create an SNS topic
Amazon will create an Amazon Resource Name (ARN) for it
ELASTIC TRANSCODER
a service to convert media files to different formats.
WHITE PAPER REVIEW
can you simply start a website which accepts credit cards on AWS
no, you should be PCI DSS compliant; you should contact AWS and find out the requirements.
Security part 1
1- Amazon is responsible for the infrastructure and for managed services like DynamoDB, Redshift, etc.; Amazon is responsible for protecting the hardware, the network, and so on. you are responsible for EC2, S3 and VPC; you are responsible for securing them
how does Amazon decommission a storage device
with some providers, when you finish working with a hard disk, another customer may come along and get that disk with your data still on it.
Amazon follows the procedure described in the DoD 5220.22-M document to delete your data, and it also has a process to destroy the whole disk
how does network security work in AWS
3- for transmission protection, Amazon uses HTTPS, and you can also have a VPC
4- the amazon.com network is not the same as the AWS network; they are segregated
5- Amazon protects against DDoS, man-in-the-middle attacks, IP spoofing, port scanning and packet sniffing
6- you can do port scanning on your own instances in AWS, BUT YOU HAVE TO TELL AMAZON IN ADVANCE
7- in AWS you have: passwords, multi-factor authentication, access keys, key pairs, X.509 certificates (to secure CloudFront, for example)
8- you have the AWS Trusted Advisor service, which checks your system and reports any issues
what is IP Spoofing
Amazon will not allow an instance to send traffic with a source IP or MAC address other than its own; some software lets you change your MAC or IP to perform man-in-the-middle attacks, and AWS prevents this
what is instance isolation
Amazon runs multiple instances on the same machine; these instances are isolated using the Xen hypervisor. in addition, there is a firewall between the hypervisor and the physical network.
RAM FOLLOWS THE SAME ISOLATION ARCHITECTURE
in addition, you don't have access to the raw disk; you have access to a virtualized disk
MEMORY WILL NOT BE ALLOCATED TO A NEW CUSTOMER BEFORE SCRUBBING (zeroing everything out, i.e. deleting everything)
does AWS have access to your operating system
no, AWS cannot access your instance
what about firewall
Amazon provides a mandatory firewall with all inbound connections closed by default.
do we have EBS encryption
yes, but only on the larger EC2 instance types
can you terminate SSL on load balancer
yes, you can terminate SSL at the load balancer, so the communication between load balancer and EC2 is not encrypted
can you get the IP address at the load balancer
yes you can know the ip of the client at the load balancer
what are the types of elasticity mentioned in the white papers
1- cyclic scaling: weekly, daily, monthly
2- event-based scaling: based on a known event
3- on-demand scaling: once there is an increase in load, scale out
WELL ARCHITECTED FRAMEWORK
you have 4 pillars
how to design for security
1- enable security on all layers
2- enable traceability: you should be able to trace risks; logging
3- automate responses to security events: if something happens you should have software in place to check and handle the situation
4- automate security best practices
how to handle Data Protection?
when you think about data protection you should first categorize your data, deciding for example that this data should be available to customers, this data should be available to sales, and so on
then you can allow people to access only the data they need
also encrypt your data whenever possible
how can you handle data protection with AWS?
1-you have full control over your data
2- you can encrypt and manage your keys
3- you can use cloud trail for logging
4- AWS storage systems are reliable
5- you can version your data
6- AMAZON WILL NEVER COPY YOUR DATA TO A NEW REGION UNLESS YOU DO THAT
what about privilege management?
we have Access control list
Role Based access
what about Infrastructure protection?
you dont have to think about this, AWS will take care of CCTV, Guards ...
what you have to think about is your VPC protection,
what about Detective Control
you can use detective control to detect security breaches,
other services that could be used are:
3- AWS Config
when we talk about reliability we are talking about the ability of the system to handle infrastructure outages and the ability to dynamically acquire computing resources
how to design for reliability?
Test Recovery procedures
Automatically recover from failure
you should know the service limits; YOU CANNOT START AS MANY INSTANCES AS YOU WANT.
what are the services that are provided for reliability
1- to build a foundation: VPC
2- Change Management: Cloud trail
3- Failure Management: AWS CloudFormation
just remember that when we talk about performance we talk about
2- Storage: EBS, S3, Glacier
3- Database: RDS, Dynamodb, Redshift
4- Space time trade off: Cloud front , elastic search, direct connect, rds replica
1- match supply and demand
2- cost-effective resources: select the right resource type
3- expenditure awareness: set alerts to check your expenses
4- optimizing over time: adopt new services
QUESTIONS FROM THE AMAZON FAQ WEBSITE
do you have a limit on the number of emails that you can send from your EC2
yes, if you need to remove this limit contact AWS
what is the pricing model
you will be charged per instance-hour
you will be charged for data transfer between instances (if it is across regions, it counts as internet data transfer)
will you be charged if the instance is stopped or terminated
no compute charges apply while the instance is stopped or after it is terminated, but you still pay for any EBS storage attached to a stopped instance
what is the compute unit for EC2
you have EC2 compute unit for calculating the power of CPU
how to get a history of all api calls that happens on EC2
turn CLOUD TRAIL on for the instance.
how many elastic IPs are allowed per region
only 5, need more contact AWS
will you be charged for a reserved Elastic IP if the instance is off
yes, you are charged for an Elastic IP that is not associated with a running instance
what are the types of IPs that you have for EC2
you have a private IP for internal use (released when you stop or terminate the instance)
you have a public IP for internet access (released when you stop or terminate the instance, or when you assign an Elastic IP to the instance)
how and when will you be charged for data transfer
1- whenever the other instance is in a different availability zone, you will be charged
2- whenever you use public or elastic IP address for data transfer
what is enhanced networking?
better network packet performance at no additional cost; it is available only on a few instance types, and only inside a VPC
you created a snapshot, it will be stored in S3, can you access it using S3 apis
No, it will be accessed only through EC2 Apis.
do you need to unmount ebs when you take a snapshot
no, but it is better to unmount it.
do you pay any money when you share your snapshot
no, sharing itself is free; you only pay for storing the snapshot
for how long will CloudWatch metric data be available after you disable monitoring
metrics are kept for 2 weeks
what will happen to your instance if you delete an auto scaling group
all the instances assigned to this group will be terminated
does the load balancer support IPv6
yes (but not inside a VPC)
can I configure my instance to accept connections only from the load balancer
yes, use a security group that allows traffic only from the load balancer's security group
can you have a security group for your load balancer
only if you are inside a VPC
can I use the same load balancer for HTTP and HTTPS
yes, by adding a listener for each
can you get a history of requests that are coming to the load balancer
yes, enable cloud trail
what is VM import/export
you can import your virtual machine images into EC2 and export them back out