AWS CERTIFIED CLOUD PRACTITIONER [CCP]

 AWS CLOUD PRACTITIONER SYLLABUS:



CCP: https://linuxacademy.com/cp/modules/complete/id/634

SOLUTION ARCHITECT: https://linuxacademy.com/cp/modules/view/id/630

==============================================

***Classic cloud IT Roles:

1] IT Architect - designing architecture based on business goals:  ERP/application/storage/Network/security Architect


***On-PREM-System Administrator:

installing/supporting/maintaining/monitoring network traffic/health/log files/service outages/OS and apps/managing file systems (create/delete/backup)/user access/access control of computer systems and servers


**APP ADMINISTRATOR:

install/update/tune apps, troubleshoot problems, work closely with dev teams and tech teams for proper integrations, manage docs, configure and review app logs:


**DB ADMINISTRATOR:

Installing and maintaining DBs in the IT environment, training employees on DB usage,

1]Ops DBA- Manage databases, change scripts,

2]Dev DBA- Plan and design DBs, change DB structure, write and test DB scripts

3]Data Admin- Manage data access policies; permissions with read, write and delete options


NETWORK ADMINISTRATOR- designing, installing , configuring, expand and maintaining LAN and WAN networks

Network related build/expansion/maintain/security policies, NFS, troubleshoot, log files etc..



STORAGE ADMIN: Storage systems- install/configure/replace/maintain/test/backup capacity/recovery/monitor storage capacities and host systems capacity/work with other application technical teams


SECURITY ADMIN: Installing, configuring, managing, enforcing security solutions.

support tools, vulnerability assessments..


==============================================

**********CLOUD ROLES************

4 Spheres of responsibility in AWS cloud:

1]Cloud Business management

2]Cloud Infrastructure

3]Cloud security

4]Cloud application infrastructure

Cloud enterprise architect -->spans all spheres- program manager, financial manager, security architect, app architect


BUSINESS==Cloud platform (business apps + Cloud Center of Excellence (CoE)) + org-wide services and processes:


CLOUD ENTERPRISE ARCHITECT: collaborate with the business to gather requirements, design solutions, independent architectures, present models to the business, validate/refine and expand architectures, manage and update architectures as necessary


PROGRAM MANAGER: Ensure the cloud is managed appropriately, manage ops teams, manage & monitor cloud metrics, service reports for the cloud environment.


FINANCIAL MANAGER: manage financial controls for the cloud, cloud resources, cost coding, cost distribution, know cost usage, optimize cloud costs..


CLOUD INFRASTRUCTURE ARCHITECT: design solution-dependent cloud infrastructure architectures.

develop and manage plans- collaborate with other teams.


CLOUD OPS ENGINEER- build, monitor and manage cloud infrastructure and shared services..


CLOUD OPS ENGINEER- OS management, patching, release updates, manage templates, document changes, new services, manage app capacity, manage network connectivity, resiliency management


performance tuning, escalate incident, RCA, documentation, backup, DR testing, compliance programs..


CLOUD SECURITY ARCHITECT: security requirements for apps, collaboration with other app teams, maintain security checklists, risk assessment plans, corporate security policies etc..


CLOUD SECURITY OPS ENGINEER- manage, monitor and enforce security, run vulnerability assessments, configure security settings and groups, manage access management, create security assessment reports..


CLOUD APPLICATION ARCHITECT: create tech designs working with business, infra and ERP teams to build, plan capacity and scaling requirements, deep SW knowledge, AWS practices...


CLOUD APPLICATION DEVELOPER- app development, manage app changes, code release, code deployment, application support and manage application documentation.


DEVOPS ENGINEER: build and operate fast, scalable workflows.. collaborate with devs and PMs.. design and build automation solutions, pipelines, automate all CI/CD- review and recommend improvements, automate tests for each app, maintain change management processes..

==============================================

MANUAL & AUTOMATED MANAGEMENT

The AWS CloudFormation tool uses templates to deploy infrastructure as code.

You can manually manage your environment, or automate it using the AWS APIs, CLI or management console; infrastructure as code gives you reusable, maintainable, extensible and testable infrastructure.


CloudFormation- an AWS service that helps you model and set up service resources & turns your infrastructure into code.

Services--> Management & Governance -->CloudFormation-->Create Stack -->

sample template --> view in Designer -->

Select stack-->Delete -->Delete stack ..to delete stack.
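The console steps above create a stack from a template. A minimal template might look like the sketch below (the bucket name is a made-up example- S3 bucket names must be globally unique):

```yaml
# Illustrative CloudFormation template sketch - one S3 bucket as code.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack with a single S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-12345   # hypothetical name
```

Creating a stack from this template provisions the bucket; deleting the stack removes it.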


Services-->Compute-->Lambda-->Create Function 


**IAC -Infrastructure as CODE:

versioning, CI, codified designs, rapid iteration on designs, integration and delivery, easy to manage, security best practices, quality code, agility, more efficient


**DEVOPS:

Design solution architecture- Elasticity will scale out/scale in resources based on demand..

Design, write code, build/compile code, test code, package app

Design, build/manage, 

==============================================

******SECURING AWS CLOUD********

Benefits of AWS cloud:

1]Elasticity (automatically change capacity/workload with automated processes)

2]Increase speed and agility(360 agility, based on minutes,for all sized companies)

3]Deploy globally in minutes(Onpremise, cloud deploy from anywhere)

4]Pay as you go (no up cost, pay and use based on min/seconds usage)

5]Secure..(secure workloads, monitor, auditors,)


security starts at account level and as below:

AWS Account/Account ID

Amazon EC2 instance 

Amazon S3 bucket (secured simple storage services)

Amazon IAM User (Identity and Access Management user)


****AWS Shared responsibility model:

divided into 3 parts (responsibility varies based on the services used)

1]Customer Data- customers, platform, apps, access mgmt, OS, network, firewall config, client & server side data encryption, traffic protection

2]AWS Foundation services-- Compute, storage, database, networking

3]AWS Global infrastructure-- Availability Zones, Regions, Edge Locations


*AWS-SECURITY DESIGN PRINCIPLES:

This contains 7 important characteristics for secured systems

1]Assign least privilege--> Grant access as needed, enforce separation of duties (user, manager), avoid long-term access credentials

2]Enable Traceability- Monitor actions and changes, leverage logs & metrics, Audit cloud resources.

3]Secure all layers- take a defense in depth approach, use different AWS services

4]Automate security- Automate routine security tasks with APIs, turn infrastructure into code (QA, Dev, UAT etc)

5]Protect data in transit and at rest (encryption/access controls, data classification with tagging, Leverage VPN and TLS connections)

6]Prepare for security events (mitigate the impact of security incidents, create processes to isolate incidents and restore operations)

7]Minimize attack surface (be ready to scale and absorb an attack, safeguard exposed resources;

load balancers and AWS Shield help with this on servers)


Security Postures: 6 Elements -how to be better -

1]Authentication: who are you? admin/manager/user?

Control access for users/groups/roles with Identity access mgmt [IAM]

Use secret and access keys when using the AWS CLI or AWS SDKs or making direct API calls; ensure the right access levels via Active Directory (AD) integrations/access keys


2]AUTHORIZATION: what can you do or not?

Authorization- allowed/denied actions

User makes a request-check credentials

Authorization--check for policies

Actions or operations(CLI,API)- create bucket

Check resources from S3 bucket

Effect - Allow or Deny on actions or operations.
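The request flow above maps directly onto an IAM policy document: a statement names an Effect, Actions and Resources. A minimal illustrative example (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Any action not explicitly allowed is denied by default; an explicit Deny always wins over an Allow.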


3]MONITORING -how much did you do? [AWS CloudWatch]

how many users logged in EC2, logging, auditing systems

AWS CloudWatch- events, CPU utilization, load balancers; collects logs from services and applications

Events can be used to respond to ops changes and take corrective actions

Alarms-can be used to send notifications and automatically make changes


4] AUDIT- what did you actually do? [AWS CloudTrail]

What happened? AWS logging with CloudTrail- to track who did what, and when

every S3, EC2, DB, RDS (Relational Database Service) event will be tracked

Services continuously publish API calls & CloudTrail continuously delivers log files


5]Encryption - is your data encrypted at rest and in transit?

Where are the keys stored?

Where are the keys used?

who manages the keys?

CLIENT SIDE-->You encrypt your data before sending to AWS

SERVER SIDE-->AWS encrypts data on your behalf after the service receives data


***KMS-Key management service [Protection at rest]

AWS Key mgmt service: manage data encryption for other AWS services

Encrypt data locally within your apps

Determine who can use keys with key policies

Integrated with AWS CloudTrail for built-in auditing

Authenticate network communications with TLS or IPSec

Manage SSL/TLS certificates by using AWS certificate manager

 (the native service will automatically renew and validate certs)

Enforce encryption in transit by only allowing HTTPS traffic over your network
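One common way to enforce HTTPS-only access is an S3 bucket policy that denies any request made without TLS, using the aws:SecureTransport condition key. A sketch (bucket name hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
```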


6]DATA PATH: what network controls do you have? VPC

content protection and network isolation

AWS VPC- Virtual Private Cloud: subnet routing, network ACLs, security groups [IPs, protocols, EC2 security groups]

Provides a logically isolated section of the AWS cloud

public and private subnets for isolation

VPN connectivity for hybrid solutions

Provides multiple layers of defense



==============================================

***IAM Credential report -- 

You can generate and download a Credential Report that lists all users in your account with following details to audit user permissions.

PASSWORDS- when pwd enabled/used/last changed/next pwd change time

ACCESS KEYS-Whether Key is active/last used/last rotated/last used on 

MFA- whether MFA is enabled?

Services--> IAM - Credential Report link

 

***PLACEMENT GROUPS-EC2

Dedicated instances -placement groups

Availability Zones -Regions - data centers


==============================================

*** SIMPLE STORAGE SERVICE (AMAZON S3)****

Object storage system- allows you to upload files to a web server; files are split into smaller bits of data

Data replicated across hosts in distributed architecture

EC2- Elastic compute cloud

EBS-Elastic block store

Mimics the formatting of a physical hard drive [file system, disk IO, partitioning, throughput]

**S3--> flat file, object-based storage service [not DB files]

Files from 0 bytes to 5TB maximum can be stored in buckets (folders)

S3 bucket names are a universal namespace- each must be globally unique

On a successful upload you will get a 200 success response code

S3 is a key-value pair storage

S3 has Version ID for versioning

Metadata

Subresources [ACL,Torrents]

***COMMON USE OF AWS S3-[Simple Storage Service]

Data Analytics- determine/analyse txn data with algorithms to identify patterns, remove data without worrying about architecture; backups, disaster recovery, replication, default encryption at rest, high availability, reliability, lots of throughput, static website hosting, integrate JavaScript libraries, offers 3rd-party sign-in, continuous deployment workflow in real time etc..

Read file after write (PUT)

Deletes (DELETE) sometimes take time to take effect/propagate- it can take about a second to update..


S3 Features:

Availability, encryption, secure with ACLs (file level) and bucket policies (bucket level), lifecycle management, versioning, tiered storage available


****S3 Storage Classes: 6 types

1]S3-Standard (For frequent accessed data)

2]S3-IA (Long lived-Infrequent accessed data)

3]S3 One Zone-IA (Long lived-Infrequent accessed,non critical data)[lower cost]

4]S3-Intelligent Tiering- AI/ML-driven storage class (long-lived data with changing or unknown/unpredictable access patterns)

5]S3 Glacier- (long-term data archiving with retrieval times from minutes to hours) secure, durable data

6]S3 Glacier Deep Archive (long-term data archiving where retrieval times within 12 hrs are acceptable)

**S3 charges are based on storage/requests/storage management pricing/transfer acceleration/cross-region replication


**Transfer Acceleration- using CloudFront/edge locations, access to Amazon S3 is faster over an optimized network path.

Cross-Region Replication- primary bucket data can be replicated into a secondary bucket so it can be used from another location if disaster recovery is needed.

S3 bucket name should be unique.

S3 can host static websites only; dynamic websites cannot be hosted on S3



***Access restriction methods on an S3 bucket

1] Bucket policy - bucket level permission

2] Object policies - individual file level access

3] IAM Policies for users/groups which can control S3 bucket based on policy.


**S3 Versioning:

Select file from bucket and edit in BBEdit-text editor and update with changes.

S3--> select bucket --> properties tab -Bucket versioning --click Edit and enable versioning. 

Now --select object/file --> select latest file and upload..

verify the file version in the bucket- it will reflect the latest version as active; the old file version will be "null" by default.

You can delete an old file version permanently by selecting delete version..

1]S3 Stores all versions of file/object

2]S3 great backup tool

3] Versioning cannot be disabled once enabled, only suspended.

4] Integrates with lifecycle rules.[can move S3 types]

5] versioning has an MFA delete capability to apply an additional layer of protection before deleting an object.
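The versioning behaviour above- every PUT keeps the old versions, each with its own Version ID- can be sketched as a toy key-value store (an illustration of the concept only, not a real S3 client):

```python
import uuid

class VersionedBucket:
    """Toy sketch of S3-style versioning: every PUT keeps the old
    versions, and each version gets its own Version ID."""
    def __init__(self):
        self._objects = {}  # key -> list of (version_id, data)

    def put(self, key, data):
        version_id = uuid.uuid4().hex
        self._objects.setdefault(key, []).append((version_id, data))
        return version_id

    def get(self, key, version_id=None):
        versions = self._objects[key]
        if version_id is None:
            return versions[-1][1]          # latest version wins
        return dict(versions)[version_id]   # old versions stay retrievable

    def list_versions(self, key):
        return [v for v, _ in self._objects[key]]

bucket = VersionedBucket()
v1 = bucket.put("report.txt", b"draft")
v2 = bucket.put("report.txt", b"final")
print(bucket.get("report.txt"))       # latest -> b"final"
print(bucket.get("report.txt", v1))   # old version -> b"draft"
```

This is why versioning works as a backup tool: overwrites never destroy the previous data.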


***STORAGE-->S3 Create S3 bucket

Storage -->S3 --> S3 is a global service, no region selection required

Bucket name should be unique.. 

By default a bucket can't be accessed by the public.. -->Block All Public Access

Versioning for bucket content..

Tags- enter tags like HR, Payroll etc..

Default Encryption server side- disable/enable..

Create Bucket-- Click to enter into new Bucket..

ARN- Amazon Resource Name.


SLA-service level agreement:

Commitment to maintain service availability based on storage class

S3 Standard, S3 standard -IA, One Zone, Glacier, Glacier Deep archive etc


Eventual Consistency:

S3 is an extremely large distributed object storage system

S3 guarantees eventual consistency, NOT immediate consistency

==============================================

*************AWS CLOUD PRACTITIONER ESSENTIALS************

Amazon- EC2 [Elastic Compute Cloud]- virtual server 

You only pay what you Use


*CLOUD COMPUTING:

On demand delivery; servers, storage ask when you need and pay accordingly

remove servers/storage when not necessary..

Cloud- building and deploying resources in a complete cloud environment

On-Prem- private cloud- resources are deployed on premises using virtualization tools

Hybrid-cloud based resources are connected to on-prem infrastructure.


*ADVANTAGES/BENEFITS OF CLOUD COMPUTING:

1. Trade upfront expense for variable expense[pay only as you use]

2. Stop spending money to run and maintain data centers

3. Elasticity/Stop guessing about capacity [cloud can scale with business needs/auto scale]

4. Benefit from massive economies of scale 

5. Increased speed and agility

6. Go global in minutes.[easily deploy app in multiple regions globally]

7. IT assets as provisioned resources

Global, available, scalable capacity [Scale Up= increasing RAM etc; Scale Out= stateless applications (Alexa with Lambda); distribute load to multiple nodes; stateless components (websites, cookies); stateful components (actions after login); implement session affinity (sticky cookies); distributed processing- Elastic MapReduce (allows EC2 instances to process complex data)],

High level managed services

Built in security, Operations on AWS, Architecting for cost, serverless architecture..

**Instantiating Compute resources

(Bootstrapping);

Golden Images ()

Containers ()

Hybrid (Containers +EC2)


**3 TYPES OF CLOUD COMPUTING:

1]Infrastructure as a Service [IaaS]/AWS- you manage the server, e.g. EC2/CloudFormation,

2]Platform as a Service [PaaS]- others manage OS/HW/security/patching; you focus on your app development. Eg. GoDaddy,

3]Software as a Service [SaaS]- you manage your inbox and Google takes care of the SW, servers, DC, network, storage, patching etc. Eg. Gmail


**AUTOMATION:

a)Serverless management and deployment (Code deploy/code pipeline)

b)Infrastructure Mgmt & Deployment (AWS Elastic Beanstalk, EC2 Auto Recovery, AWS Systems Manager, Auto Scaling)

c) Alarms & Events (CloudWatch alarms, events, Lambda security automations, WAF security automation)


***AWS SYSTEMS MANAGER:

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager.


AWS Services--> Management & Governance --> Systems Manager --> 

Run Command/patch manager etc to use deploy patch/updates on EC2 instances ..

-Used to manage fleets of EC2 instances /virtual machines

-A piece of software (an agent) is installed on each VM.

-Can be both inside AWS and On premise.

-Run Command is used to install, patch, uninstall software..

-Integrates with cloud watch to give you a dashboard of your entire estate.


**LOOSE COUPLING: (API Gateway, service discovery, asynchronous integration with queue services)


**SERVICES NOT SERVERS

Managed services-

Serverless Architectures-


***3 TYPES OF CLOUD COMPUTING DEPLOYMENTS:

1] Public Cloud-AWS,Azure,GCP

2] Hybrid -public,private mix-

3] Private Cloud (On Premise)- you manage it in your DC, OpenStack or VMware


***PARTS TO LEARN:

COMPUTE- EC2, Lambda

DATABASES- RDS, DynamoDB

STORAGE- S3, Glacier

NETWORK- VPC,ROUTE53



MULTITENANCY- sharing underlying hardware between virtual machines


***AMAZON EC2 INSTANCES

EC2 instances are securely isolated from each other

EC2- gives flexibility and control when configuring, like selection of OS/software (internal business apps, simple to complex apps, DBs etc)

EC2- instances are resizable as per need [vertically scaling instances]

EC2- control networking access [public or private accesss]

EC2- allows developers to innovate quickly

CaaS- Compute as a Service model



***AMAZON EC2 INSTANCE TYPES:

Instance Family -different instance types


1.GENERAL PURPOSE- balanced compute, memory and n/w resources; web servers, code repositories

2.COMPUTE OPTIMIZED- compute-intensive tasks, gaming servers, high performance computing (HPC), scientific modelling

3.MEMORY OPTIMIZED-memory intensive tasks, high performance DB,

4.ACCELERATED COMPUTING- floating point calculations, graphics, data pattern matching; uses hardware accelerators

5.STORAGE OPTIMIZED- high performance for locally stored datasets, DWH storage

*IOPS- input/output operations per second, a metric that measures the performance of a storage device.


***AMAZON EC2 PRICING**

1.ON-DEMAND- only pay for the time the instance runs; no long-term or upfront payment, no prior contracts, average usage, low cost and flexible, pay per hour, ideal for workloads that cannot be interrupted..

Pay fixed rate by hour/sec with no commitment


2.AMAZON EC2 SAVINGS PLANS- low prices for EC2 usage, up to 72% savings, also covers Lambda usage,

3.RESERVED INSTANCES- steady-state workloads, predictable usage, 1-3 yr term with 3 payment options, up to 75% off, best for long-term value, hourly charged, can resell unused reserved instances (RI Marketplace)

**RI Payment options are 3

a) All Upfront Payment- pay before you use services

b) Partial Upfront Payment- pay partial before you use 

c) No Upfront Payment-Pay after using services

**RI Class Offerings:

a)Standard- upto 75% reduced pricing

b)Convertible- up to 54% reduced pricing, allows changing RI attributes

c)Scheduled-for specific time periods (once a week, few hours etc)

 

4.SPOT INSTANCES- bid on spare computing capacity, batch workloads, can reduce On-Demand costs by up to 90% (biggest savings), no contract required.

Bid for a price and use- for interruption-tolerant apps.

Can handle interruptions, non critical jobs,

If AWS terminates the instance you are not charged; if you terminate it, you are still charged.

Flexible start and end times



5.DEDICATED HOSTS-physical host dedicated ones.

Dedicated servers

Can be on-demand or reserved(upto 70% off)

a physical EC2 server dedicated for your use; can reduce costs by using your existing server-bound software licenses



**SCALING AMAZON EC2****

Scalability: 

Elasticity-

Amazon EC2 auto scaling: 

a]Dynamic scaling- responds to changing demand

b]Predictive scaling- automatically schedules the right number of EC2 instances based on predicted demand.

a)Scale OUT when demand increases

b)Scale IN when demand drops

Decouple systems so components can be scaled down or removed when they are not needed.

[stateless apps, distribute load with multiple nodes, stateless components, implement session affinity, distributed processing]


**Autoscale by default- without config: [S3, EFS, Lambda]

Amazon S3 and Amazon EFS are storage services that scale automatically in storage capacity without any intervention to meet increased demand.

Also, AWS Lambda dynamically scales function execution in response to increased traffic


**DIRECTING TRAFFIC WITH ELASTIC LOAD BALANCING:

*Elastic Load Balancing[ELB] -is the AWS service that automatically distributes the incoming traffic across multiple resources/EC2 instances.

Runs at the Regional level- automatically scalable service- no change in hourly cost;

when low traffic exists, the auto scaling engine stops the additional EC2 instances.


*Load Balancer(application) -acts as single point of contact for all incoming traffic to your auto scaling group. 

install, manage, update; durable; distributes traffic; cost efficient; highly available
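The distribution idea above can be sketched with a toy round-robin balancer- an illustration of the concept only, not the ELB API:

```python
import itertools

class RoundRobinBalancer:
    """Toy sketch of how a load balancer spreads incoming
    requests evenly across registered targets."""
    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)

    def route(self):
        # each incoming request goes to the next target in turn
        return next(self._cycle)

lb = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
assignments = [lb.route() for _ in range(6)]
print(assignments)  # each instance receives an equal share
```

Real ELB routing also weighs in health checks and (for ALB) request content, but the even-spread principle is the same.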


Elastic Load Balancer-3 Types 

1)Application Load Balancer (HTTP/HTTPS)- advanced routing/visibility features. Layer 7 (makes intelligent decisions)

2)Network Load Balancer (TCP)- for extreme/ultra-high performance, handles millions of requests; static IP addresses.

3)Classic Load Balancer (old generation; HTTP, HTTPS, TCP)- previous generation, used with an existing EC2-Classic network. Test & Dev, keeps costs low.


*Creating ELB:

1)Create App ELB--> enter name, scheme, listeners--> select multiple AZs--> next--

2) Configure security groups

3) Configure routing- target group, health check path '/'- index.html

advanced health checks- interval time, success codes, healthy thresholds,

4)Register targets-- add to registered targets

5) Review and submit-- it takes 6-10 min to set up the ELB


Create New EC2 -->Advanced details -->BOOTSTRAP SCRIPT -->

#!/bin/bash        # runs as root
yum update -y
yum install httpd -y
service httpd start
chkconfig httpd on
cd /var/www/html
# original echo line was truncated; page content below is an example
echo "<html><body><h1>Hello from EC2</h1></body></html>" > index.html


***MESSAGING & QUEUEING*****


1] MONOLITHIC APPLICATION- all components are tightly coupled; if 1 component fails, the other components are unable to work

Tightly coupled architecture:


2] MICROSERVICES- all components are loosely coupled; if 1 component fails, the remaining components can still work independently.

Loosely coupled architecture:

a) Amazon Simple Notification Service [SNS]

publish/subscribe service (SNS Topic)

a publisher publishes messages to subscribers (web servers, email addresses, AWS Lambda functions or several other options)
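The fan-out behaviour above can be sketched as a toy publish/subscribe topic- an illustration of the SNS concept only, not the SNS API:

```python
class Topic:
    """Toy sketch of SNS-style pub/sub: one published message
    fans out to every subscriber of the topic."""
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        # every subscriber receives a copy of the same message
        for callback in self._subscribers:
            callback(message)

received = []
topic = Topic("order-events")
topic.subscribe(lambda m: received.append(("email", m)))    # e.g. email endpoint
topic.subscribe(lambda m: received.append(("lambda", m)))   # e.g. Lambda function
topic.publish("order #42 placed")
print(received)  # both subscribers got the same message
```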


b) Amazon Simple Queue Service[SQS]/Endpoints [ex-Rabbit MQ]

Using SQS, you can send, store and receive messages between software components without losing messages or requiring services to be available.

-Can retain msgs for up to 14 days

-can send them in sequential or parallel

-Can ensure only one message sent

-can ensure messages are delivered at least once.


Amazon Simple Queue Service (Amazon SQS) is a service that enables you to send, store, and receive messages between software components through a queue.

Application--> sends messages into Queue--> user/service retrieves messages from Queue--> processes and deletes them from Queue..

The data contained in a message is called the PAYLOAD:
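The send/receive/delete flow above can be sketched with a toy in-memory queue- an illustration of the SQS concept only, not the SQS API:

```python
from collections import deque

class Queue:
    """Toy sketch of the SQS flow: a producer sends messages,
    a consumer receives, processes, and then deletes them."""
    def __init__(self):
        self._messages = deque()

    def send(self, payload):
        self._messages.append(payload)

    def receive(self):
        # reading does NOT remove the message (like SQS receive)
        return self._messages[0] if self._messages else None

    def delete(self):
        # the consumer deletes only after processing succeeds
        self._messages.popleft()

q = Queue()
q.send({"order_id": 42})   # producer side: payload goes into the queue
msg = q.receive()          # consumer side: read the message...
q.delete()                 # ...then delete it after processing
print(msg, q.receive())
```

Separating receive from delete is what lets the real SQS redeliver a message if the consumer crashes mid-processing.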


12] ADDITIONAL COMPUTE SERVICES:

EC2 instances are virtual machines. 


*** WHEN TO USE EC2 COMPUTING & SERVERLESS COMPUTING??

To host traditional apps, or for full access to the underlying OS (Windows/Linux)- use Amazon EC2

To host short-running functions, service-oriented apps, or event-driven apps without managing servers- go for serverless computing.


1)SERVERLESS COMPUTING:

You cannot see/access the underlying infrastructure- serverless;

Serverless means the code still runs on servers, but you do not need to manage those servers, which lets you focus on innovating new products and features instead of maintaining servers. You can adjust an application's capacity by modifying units of consumption such as throughput and memory.

An AWS service for serverless computing is AWS Lambda.



2)AWS LAMBDA (serverless computing service)

AWS Lambda is a service that lets you run code without needing to provision or manage servers. Ex: an image-resizing service triggered when you upload an image.

Lambda is designed to run code that completes in < 15 minutes.
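A Lambda function is just a handler that receives an event. The sketch below mimics the image-upload trigger described above; the event fields follow the shape of an S3 upload notification, and the resize step is a placeholder since this is only an illustration:

```python
def handler(event, context):
    """Minimal sketch of the Lambda handler shape for an
    S3-triggered function (resize logic is a placeholder)."""
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # a real function would fetch the object and resize it here
        processed.append(key)
    return {"processed": processed}

# hypothetical event mimicking an S3 upload notification
event = {"Records": [{"s3": {"object": {"key": "photos/cat.png"}}}]}
print(handler(event, None))
```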


3)CONTAINERS SERVICES: 

Containers provide you with a standard way to package your application code and dependencies into a single object. They are secure, reliable and scalable.

a) AMAZON ELASTIC CONTAINER SERVICE [ECS]

Container orchestration tools [Docker]- package the code that needs to run;

ECS supports Docker containers. Docker is a software platform that enables you to build, test, and deploy apps quickly. You can use API calls to launch and stop Docker-enabled applications.


b) AMAZON ELASTIC KUBERNETES SERVICE [EKS]

Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale. EKS makes it easy to apply Kubernetes updates to your apps.


4)AWS FARGATE:(serverless computing service) for Microservices

Fargate is a serverless compute engine/platform for containers which works with ECS and EKS. It manages the server infrastructure for you.

AWS Fargate is a serverless compute engine for containers.


***MODULE 3- AWS GLOBAL INFRASTRUCTURE:

AWS data centers at global locations for high availability and fault tolerance.

Large groups- REGIONS, based on business traffic demand


REGION --> connected to other REGIONS with a high-speed network

Architecture of a REGION--> isolated from other Regions- data does not flow between them without granting permissions.

Govt compliance requirements- satisfied by local data centers.

****4 business factors to choose right Region/data centers

1)Data Sovereignty Laws/Compliance (regulatory controls)

2)Latency/Proximity/Timezone to end users(how close you are with customer base)

3)Feature/AWS service availability (e.g. AWS Braket (h/w); select based on availability of features)

4)Pricing (a few locations are more expensive to operate, ex: Brazil has higher taxes so cost is higher; determined by many factors, hence AWS is transparent in pricing)


**AVAILABILITY ZONES** AZ

A Region consists of two or more Availability Zones.

If you want to run a business from multiple locations- then


REGION with multiple AZs- one AZ can have multiple data centers

Power/NW and connectivity---for business continuation..

AZ is a single data center or group of data centers within a Region.

AZs are located tens of miles apart from each other.


*****AWS Account-subscription/support package types

1]Basic-FREE (Email support only for billing and acc)

2]Developer- $29/month- tech support by email, 24hr response; no 3rd-party support; 7 Trusted Advisor checks

3]Business- $100/month- full access to Trusted Advisor for help on infra, API support [one-hour response on production system failure]

4]Enterprise- $15,000/month + TAM (Technical Account Manager)- guidance on plan/develop/run AWS solutions; Support Concierge- billing and account analysis assistance

15 min response to business-critical support cases;

All trusted advisor checks:


Acc & billing support

Service limit increase 

Tech support 


***AWS Marketplace: a curated digital catalogue with 1000s of software listings from vendors. Free to use, or can have an associated charge which is billed as part of your AWS bill.


**AMAZON CLOUDFRONT: Services-->Networking- CloudFront

**TTL- Time to Live of cache; the default TTL is 86400 sec/24 hrs. We can set more than the default TTL also.

Amazon CloudFront is AWS's Content Delivery Network (CDN)- a global service that delivers content with low latency and high transfer speed


1]Web Distribution- static/dynamic content distribution

2]RTMP -used for media streaming 


Create CloudFront- select distribution --> select bucket access restriction --> protocol policy --> setup TTL @edge locations --> will create domain name.. XXXX.cloudfront.net


**EDGE LOCATIONS***

An Edge Location is a site that Amazon CloudFront uses to store cached copies of your content closer to your customers for faster delivery.

Edge locations are small data centers that host web (static and dynamic) content.

*Edge locations are separate from Regions/AZs

*Origin- the origin of all files that the CDN will distribute; this can be an S3 bucket, EC2 instance, ELB or Route 53.

*Distribution- the name given to the CDN, which consists of a collection of edge locations.

*Web distribution/RTMP distribution.

*Edge locations are not read-only- we can write files to them also.

*Objects are cached for the TTL (Time to Live), set in seconds.

*Disable a distribution before deleting the record.

*We can clear the cache, but it has specific charges too.
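The TTL behaviour above- serve from cache until the TTL expires, then re-fetch from the origin- can be sketched as a toy cache. The clock is injectable so the expiry is easy to demonstrate without waiting; this illustrates the concept only, not CloudFront itself:

```python
class TTLCache:
    """Toy sketch of CloudFront-style edge caching with a TTL."""
    def __init__(self, origin, ttl, clock):
        self._origin = origin   # function: key -> content (the "origin server")
        self._ttl = ttl         # seconds an object stays fresh
        self._clock = clock     # function: () -> current time
        self._cache = {}        # key -> (content, fetched_at)

    def get(self, key):
        now = self._clock()
        if key in self._cache:
            content, fetched_at = self._cache[key]
            if now - fetched_at < self._ttl:
                return content          # cache hit: served from the "edge"
        content = self._origin(key)     # miss or expired: go to origin
        self._cache[key] = (content, now)
        return content

fake_time = [0]
fetches = []
def origin(key):
    fetches.append(key)
    return "<content of %s>" % key

cdn = TTLCache(origin, ttl=86400, clock=lambda: fake_time[0])
cdn.get("index.html")        # miss: fetched from origin
cdn.get("index.html")        # hit: served from cache, no origin fetch
fake_time[0] += 90000        # more than 24 hrs later...
cdn.get("index.html")        # TTL expired: fetched from origin again
print(len(fetches))
```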

 

Placing a cached copy close to the customer helps them access the data easily instead of reaching a distant data center.

An origin is the server from which CloudFront gets your files. Examples of CloudFront origins include Amazon Simple Storage Service (Amazon S3) buckets and web servers


accelerate communication and content delivery using EDGE LOCATIONS

DNS- Domain Name System ==AMAZON ROUTE 53;

AWS Amazon Route 53- outside the DC.

AWS outposts: inside DC- AWS Outposts is a service that enables you to run infrastructure in a hybrid cloud approach


Regions are geographically isolated areas.

Regions contain 1 or more availability zones.

Edge locations run Amazon CloudFront and Route 53.


**HOW TO PROVISION/ACCESS/INVOKE AWS RESOURCES-PART 1**

AWS Mgmt Console, CLI, SDKs, Elastic Beanstalk and CloudFormation- tools to access AWS resources; in AWS, everything is an API call:

1) AWS Management Console:- web-based interface for accessing and managing AWS services. You can find services by name, keyword or acronym. The console includes wizards and automated workflows that simplify the process of completing tasks.


2)AWS Command Line Interface (CLI):- to save time when making API requests, you can use the CLI, which lets you control multiple AWS services directly from the command line. The CLI is available for Mac, Windows and Linux users.


3)AWS SOFTWARE DEVELOPMENT KITS (SDKs):-

SDKs make it easier for you to use AWS services through APIs designed for your programming language or platform. To support the SDKs, AWS provides documentation/sample code for each supported language: C++, Java, .NET and more.


**HOW TO PROVISION/ACCESS/INVOKE AWS RESOURCES-PART 2**

4)AWS Elastic Beanstalk: (AWS Manage Tool)

With this, you provide code and config settings- then Elastic Beanstalk deploys the apps/resources necessary to perform the tasks below

Adjust capacity, load balancing, automatic scaling, application health monitoring etc: programmable.


Elastic Beanstalk- a powerful way of deploying applications into the AWS cloud.

LAB-->Services-->Compute -->Elastic beanstalk -->Get started 

Enter app name--> choose platform (PHP)--> sample application -->Create application -->Success message once app is up and running..



5)AWS CloudFormation: (AWS management tool/infrastructure-as-code tool)

supports storage, analytics, AI, ML and many other resource types

With this, you can treat infrastructure as code, building an environment by writing lines of code instead of using the AWS mgmt console.

CloudFormation provisions your resources in a safe, repeatable manner without you having to perform manual actions or write custom scripts.


=====================================================

**AMAZON EC2-PRODUCT: 

Elastic Compute Cloud: an EC2 instance is a virtual server in the cloud, which reduces the time required to boot a new server instance to minutes.

Pay only while instances are running. Selection of HW/SW, global hosting.

Or sign a 1- or 3-year Reserved Instance contract for discounted pricing, etc.

**PRICING:


***EC2 instance types [Remember as "FIGHT DR PXZ AU"]

F for FPGA

I for IOPS (high-speed storage)

G for Graphics

H for High disk throughput

T for cheap general purpose (think T2 micro)

D for Density (dense storage)

R for RAM (memory optimized)

M for General purpose

C for Compute

P for Graphics (GPU)

X for Extreme memory

Z for high memory and CPU

A for ARM-based workloads

U for Bare metal



*****BUILD/CONFIGURING EC2 DEMO:

1. Log in to the AWS console

2. Choose region

3. Select the EC2 wizard

4. Select AMI (SW) - Amazon Machine Image

5. Select instance type (HW) - t2.micro

6. Configure network [default network, subnet, number of instances]

7. Configure storage [root volume size, add volume]; add TAGS, e.g. Name=ec2demo; security group = SSH, HTTP and name of security group - review and launch

8. Configure key pairs - ec2-demo - and launch instance - success message

9. Launch and connect - check logs until instance state is "running"

Select instance - copy public IP

Launch PuTTY - paste the public DNS and click Open

Configure the private key - SSH, Auth - browse for the private key

PuTTY needs a .ppk file:

In PuTTYgen, click Load, go to the path and select the .pem file, then Save private key - this saves it as a .ppk file.

Then go to PuTTY, select the private key, and check the connection.


EC2 INSTANCE CREATION STEPS:

1.CHOOSE AMI 2.CHOOSE INSTANCE TYPE 3.CONFIGURE INSTANCE 4.ADD STORAGE 5.ADD TAGS 6.CONFIGURE SECURITY GROUP 7.REVIEW


**MODULE 4-AWS NETWORKING/CONNECTIVITY USING VPC****

Amazon VPC -Virtual Private Cloud - isolated section to access resources 

A networking service that you can use to establish boundaries around your AWS resources is Amazon Virtual Private Cloud (Amazon VPC).


Public facing resources- with internet -public subnet

Public Traffic --> Connects VPC using "Internet gateway"

--> ELB (Elastic Load Balancer) --> EC2 Instances --> Database


Private facing resources-without internet -private subnet 

To access private resources in a VPC, you can use a virtual private gateway. 

Private Traffic --> Connects VPC using "Virtual Private gateway"

--> ELB (Elastic Load Balancer) --> EC2 Instances --> Database

An encrypted VPN connection connects your internal corporate network to the private VPC through the virtual private gateway.


**AWS Direct Connect

AWS Direct Connect is a service that enables you to establish a dedicated private connection between your data center and a VPC.  

Data Center --> Direct Connect Traffic --> AWS Direct Connect-->

ELB--> EC2 Instance --> Database


**"Amazon Connect" is a cloud-based contact center service that makes it easy for businesses to deliver customer service at low cost.


**SUBNETS and NETWORK ACCESS CONTROL LISTS (NACL)**

A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level.

Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound. 

By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules.
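A minimal sketch of the idea above: NACL rules are numbered, evaluated in ascending order, and the first matching rule wins, with an implicit deny at the end. The rule format and names here are illustrative only, not the AWS API.

```python
# Hypothetical sketch of network ACL evaluation: numbered rules,
# checked in ascending order, first match wins.
def evaluate_nacl(rules, port):
    """Return 'ALLOW' or 'DENY' for traffic on the given port."""
    for rule_number, rule_port, action in sorted(rules):
        if rule_port == port:
            return action
    return "DENY"  # the implicit deny rule at the end of every NACL

rules = [
    (100, 443, "ALLOW"),  # rule 100: allow HTTPS
    (200, 22, "DENY"),    # rule 200: deny SSH
]

print(evaluate_nacl(rules, 443))   # ALLOW
print(evaluate_nacl(rules, 22))    # DENY
print(evaluate_nacl(rules, 8080))  # DENY (no matching rule)
```

Note that a lower-numbered rule shadows any higher-numbered rule for the same traffic, which is why rule ordering matters when you add your own rules.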


Your VPC-Virtual Private Cloud 

IGW- internet GateWay


NETWORK HARDENING:

PUBLIC SUBNET - has a route to the internet gateway

PRIVATE SUBNET - no direct route to the internet gateway


AWS Account--> Region --> VPC(using internet Gateway)

--> AZ --> Public/private subnets (using NACL)

 -->Security Group --> EC2 Instances/ DBs


A subnet is a section of a VPC in which you can group resources based on security or operational needs. 

Subnets can be public or private.

Public subnets contain resources that need to be accessible by the public, such as an online store’s website

Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information


PACKET: A packet is a unit of data sent over the internet or a network.  


***VPN  Virtual Private Network- establish a secure and private tunnel from your network/device to AWS global Network: 

AWS Site-to-Site VPN: securely connects an on-premises network or branch office site to a VPC.

AWS Client VPN: securely connects users and devices to AWS or an on-premises network.


***SECURITY GROUP:

A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.

Security groups perform stateful packet filtering/access verification. They remember previous decisions made for incoming packets.
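The stateful behavior can be sketched as a tiny connection-tracking model: once an inbound packet is allowed, the response traffic passes without an explicit outbound rule. All names are illustrative, not the AWS implementation.

```python
# Sketch of stateful filtering: a security group remembers connections
# it allowed in, so response traffic is allowed out automatically.
class SecurityGroup:
    def __init__(self, inbound_allowed_ports):
        self.inbound_allowed = set(inbound_allowed_ports)
        self.tracked = set()  # connection state table

    def inbound(self, src, port):
        if port in self.inbound_allowed:
            self.tracked.add((src, port))  # remember the connection
            return True
        return False  # implicit deny of all other inbound traffic

    def outbound_response(self, dst, port):
        # responses to tracked connections pass without an explicit rule
        return (dst, port) in self.tracked

sg = SecurityGroup(inbound_allowed_ports={80})
print(sg.inbound("10.0.0.5", 80))            # True: port 80 allowed
print(sg.outbound_response("10.0.0.5", 80))  # True: response remembered
print(sg.inbound("10.0.0.5", 22))            # False: implicit deny
```

Contrast this with the NACL model above, which remembers nothing and checks every packet in both directions.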


***GLOBAL NETWORKING***

Edge locations, CloudFront for CDN.

AMAZON ROUTE 53: AWS Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications hosted in AWS.

Global service similar to IAM.

Amazon Route 53 connects user requests to infrastructure with ability to manage the DNS records for domain names.

Another feature of Route 53 is the ability to manage the DNS records for domain names. You can transfer DNS records for existing domain names managed by other domain registrars. You can also register new domain names directly in Route 53


DNS: Domain Name System (DNS) resolution involves a DNS server communicating with a web server to translate domain names into IP addresses.


Latency based routing

Geolocation DNS

Geoproximity routing

Weighted round robin
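Weighted round robin (the last policy above) returns each record in proportion to its weight. A deterministic sketch, with made-up record names and weights:

```python
# Illustrative weighted round-robin: expand each record by its weight,
# then cycle through the pool so answers follow the weight ratio.
from itertools import cycle

records = {"server-a": 3, "server-b": 1}  # weight 3 : 1

pool = cycle([name for name, w in records.items() for _ in range(w)])

answers = [next(pool) for _ in range(8)]
print(answers.count("server-a"))  # 6 of 8 answers go to the heavier record
print(answers.count("server-b"))  # 2
```

Route 53's real implementation randomizes selection by weight; the proportional outcome is the point.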


Route 53 can also be used to register domain names.

Amazon CloudFront -- at EdgeLocations - CDN(Content Delivery Network)

===================================================================

MODULE 5- AWS STORAGE & DATABASES****

1)Instance Stores [Temp data storage]

2)Amazon EBS-Elastic Block Store [permanent data storage]


Compute (CPU/memory/network) works with files stored as bytes on disk:

databases, file systems, enterprise storage.


1)An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.


2) Amazon Elastic Block Store (Amazon EBS/virtual Disc) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available.


To create an EBS volume, you define the configuration (such as volume size and type) and provision it; you can then attach it to an Amazon EC2 instance.

Because EBS volumes are for data that needs to persist, it’s important to back up the data. You can take incremental backups of EBS volumes by creating Amazon EBS snapshots.


An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved.
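The incremental behavior can be sketched like this: the first snapshot copies every block, later snapshots save only blocks changed since the previous one. The block/volume structures are illustrative only.

```python
# Sketch of incremental snapshots: full copy first, changed blocks after.
def take_snapshot(volume_blocks, previous_snapshot=None):
    if previous_snapshot is None:
        return dict(volume_blocks)  # first snapshot: full copy
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if previous_snapshot.get(block_id) != data  # only changed blocks
    }

volume = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1 = take_snapshot(volume)         # full: 3 blocks
volume[1] = "BBB"                     # one block changes
snap2 = take_snapshot(volume, snap1)  # incremental: 1 block
print(len(snap1), len(snap2))         # 3 1
```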

To keep EBS volumes safe: 1) create EBS backups (snapshots) and 2) ensure EBS data is encrypted at rest.

 

****EBS comes in 2 types, SSD and MAGNETIC (HDD):

*1) SSD:

a) General Purpose SSD (gp2)

b) Provisioned IOPS SSD (io1) - highest-performance I/O (Input/Output operations Per Second)

*2) MAGNETIC (HDD) - previous generation:

a) Throughput Optimized HDD (st1)

b) Cold HDD (sc1) - low-cost HDD


**COMMAND LINE INTERFACE (CLI) -->

IAM --> create an admin user --> select user --> Security credentials tab -->

Make the old access key inactive --> delete the key ID --> confirm

Create access key --> download the access key ID and secret access key

EC2 instance --> select the EC2 instance and Connect -->


aws s3 mb s3://acloudbucket2019    # create a bucket

aws configure                      # store credentials

# enter the Access Key ID and Secret Access Key

us-east-1                          # region name

aws s3 mb s3://testbucket          # create a bucket

aws s3 ls                          # list buckets

echo "hello" > hello.txt

ls

aws s3 cp hello.txt s3://testbucket

upload: ./hello.txt to s3://testbucket/hello.txt



**** AMAZON SIMPLE STORAGE SERVICE (S3):

**OBJECT STORAGE: In object storage, each object consists of data, metadata, and a key.An object’s key is its unique identifier.

Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets; S3 offers unlimited total storage with a maximum object size of 5 TB.

S3 versioning feature to track changes to your objects over time and you can set permissions to control visibility and access to it.
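The versioning idea above can be modeled as a bucket mapping each key to a list of versions, where a plain read returns the latest. This models the concept only, not the S3 API.

```python
# Sketch of object storage with versioning: key -> list of versions.
class Bucket:
    def __init__(self):
        self.objects = {}  # object key -> versions (oldest first)

    def put(self, key, data):
        self.objects.setdefault(key, []).append(data)

    def get(self, key, version=None):
        versions = self.objects[key]
        return versions[-1] if version is None else versions[version]

b = Bucket()
b.put("report.txt", "draft")
b.put("report.txt", "final")
print(b.get("report.txt"))     # final  (latest version)
print(b.get("report.txt", 0))  # draft  (older version still kept)
```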

Versions of objects, multiple buckets, create permissions, stage data between tiers.

Older data storage, 99.999999999% (11 nines) durability, data resides in multiple locations.


You cannot install an OS or a database on S3; for that requirement you need EBS or EFS.

EBS (Elastic Block Store): a virtual hard disk to use with EC2 instances, automatically replicated within 1 AZ.

EFS (Elastic File System): a file storage service; if your data grows, EFS expands its size accordingly. Easy to use with a simple interface.

EFS is elastic (auto-resizes), whereas EBS is not.


****AMAZON S3 storage classes

With Amazon S3, you pay only for what you use. When choosing an Amazon S3 storage class, consider these two factors:

How often you plan to retrieve your data

How available you need your data to be

1) S3 Standard: provides high availability for objects. This makes it a good choice for a wide range of use cases, such as websites, content distribution, and data analytics. Stores data in at least 3 AZs.


2) S3 Standard-IA: ideal for data that is infrequently accessed but requires high availability when needed. Similar to S3 Standard but with a lower storage price and a higher retrieval price; stored in 3 AZs.


3) S3 One Zone-IA: stores data in a single Availability Zone and

has a lower storage price than S3 Standard-IA. A good storage class to consider for saving cost when you can easily reproduce the data in the event of an AZ failure.


4) S3 Intelligent-Tiering: requires a small monthly monitoring and automation fee per object.

Amazon S3 monitors objects' access patterns. If you haven't accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier. If you access an object in the infrequent access tier, Amazon S3 automatically moves it back to the frequent access tier.
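The 30-day tiering rule described above boils down to a single threshold check; a trivial sketch (tier names illustrative):

```python
# Sketch of the 30-day tiering rule: objects not accessed for 30
# consecutive days move to the infrequent access tier.
def choose_tier(days_since_last_access):
    if days_since_last_access >= 30:
        return "infrequent-access"
    return "frequent-access"

print(choose_tier(5))   # frequent-access
print(choose_tier(45))  # infrequent-access
```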


5) S3 Glacier: a low-cost storage class that is ideal for data archiving; you can retrieve objects within a few minutes to a few hours.


6) S3 Glacier Deep Archive: the lowest-cost object storage class, ideal for long-term archiving.

When choosing S3 Glacier Deep Archive, consider how quickly you need to retrieve archived objects: objects stored in this class can be retrieved within 12 hours.


S3 Glacier: to archive data for compliance policies [WORM - write once, read many].

S3 life cycle policy- 


***S3 storage BENEFITS: durable, web-enabled, regionally distributed, cost savings, serverless (no EC2 instances needed).

Object storage: stores files as whole objects (docs/images/videos); when an object changes, the whole thing is uploaded again.

Block storage: stores data in blocks and updates only the incremental changes.



****AMAZON ELASTIC FILE SYSTEM***

**FILE STORAGE: In file storage, multiple clients (users, applications, servers) can access data that is stored in shared file folders. The storage server uses block storage with a local file system to organize files, and clients access data through file paths.


Compared to block storage and object storage, file storage is ideal for use cases in which a large number of services and resources need to access the same data at the same time.


Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically which can scale on demand to petabytes without disrupting applications. 


Elastic Block Storage Vs Elastic File System [EBS-EFS]

EBS: stores data in a single AZ; to attach an EC2 instance to an EBS volume, both must be in the same AZ.

EFS: a regional service that stores data across multiple AZs; on-premises servers can access EFS using AWS Direct Connect.


***DATABASE:****AMAZON RELATIONAL DATABASE SERVICE-RDS*** RDBMS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups

(Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud(Amazon Aurora database).


Amazon EBS is Primary storage used by RDS instances


RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. 

You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application.

Amazon RDS has a number of different security options which offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data while it is being sent and received)..Amazon RDS database engines..


***Which part of AWS allows for RDS Instance 

The AWS Management Console lets you create new RDS instances through a web-based user interface.

You can also use AWS CloudFormation to create new RDS instances using the CloudFormation template language.


*Amazon RDS is available on 6 supported database engines:

Amazon Aurora

PostgreSQL

MySQL

MariaDB

Oracle Database

Microsoft SQL Server


RDS 2 key features:

1) RDS supports Multi-AZ deployments for disaster recovery/failover.

2) RDS supports read replicas for read performance (up to 5 read replicas).


Migrate databases using the lift-and-shift method with DMS (Database Migration Service).


Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and PostgreSQL, supports up to 15 read replicas, and continuously backs up to S3, ready to restore.

It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.



**********AMAZON DYNAMODB**

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale.

It is a key-value non-relational database service (granular API) that delivers single-digit millisecond performance at any scale.


“NoSQL databases” use structures other than rows and columns to organize data. With the key-value structural approach for nonrelational databases, data is organized into items (keys), and items have attributes (values).


Serverless DB (no need to provision, maintain, or operate software, or patch), auto-scalable (shrinks or grows automatically, handling up to 10 trillion requests per day), stores data across multiple drives, highly performant with millisecond response times, reliable, purpose-built.

Non-relational NoSQL DB with flexible schemas; add or remove attributes at any time; good for item-to-item data sets.


Key -value pair data..

*Attributes can vary from item to item, which does not affect the table or other items.
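The flexible-schema idea above can be modeled as a table mapping each key to an item dict, where items carry different attributes. Table and attribute names are made up.

```python
# Sketch of a key-value (NoSQL) table: items in the same table can
# carry different attributes, and changing one item affects no others.
table = {}  # partition key -> item attributes

table["song-001"] = {"title": "Track A", "artist": "Band X"}
table["song-002"] = {"title": "Track B", "year": 2019, "genre": "rock"}

# adding an attribute to one item does not affect other items
table["song-001"]["rating"] = 5

print(sorted(table["song-001"]))  # ['artist', 'rating', 'title']
print(sorted(table["song-002"]))  # ['genre', 'title', 'year']
```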


OLTP vs OLAP [Online transaction vs Online analytics processing]


**********Amazon Redshift [DWH Service]

Amazon Redshift is a data warehousing service used for big data analytics(OLAP). It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.

- Scalability, high availability (Multi-AZ); anti-pattern: online transaction processing (OLTP).


**AWS Neptune is AWS's graph database: scalability, availability.


DATA LAKE: an approach that allows you to store massive amounts of data in a central location, readily available to be categorized, processed, analyzed, and consumed by diverse groups within an organization.


- Redundancy to defeat failure, durable data storage, fault isolation, traditional horizontal scaling (scale out), sharding (to process data faster).

- Right-sizing, elasticity to expand/shrink.


***CACHING:

Application caching [Elastic cache]

Edge caching [CDN cloudfront]


***GLOBAL AWS SERVICES / REGIONAL

1) IAM

2) Route 53

3) CloudFront (CDN)

4) SNS - Simple Notification Service

5) SES - Simple Email Service

6) S3 buckets - region-wise


***AWS SES-Amazon Simple Email Service (SES) is a cost-effective email service built on the reliable and scalable infrastructure that Amazon.com developed to serve its own customer base. With Amazon SES, you can send transactional email, marketing messages, or any other type of high-quality content to your customers.



****AWS services for ONPREMISE: Own DC

a) Snowball [disk: load data and ship it back to Amazon] - 80 TB; move data to AWS within about a week to load into S3.

b) Snowball Edge [runs Lambda functions locally; works without AWS connectivity]

c) Storage Gateway - cache inside your DC with a replica in S3

d) CodeDeploy [deploys apps on-premises or into EC2]

e) OpsWorks (similar to Beanstalk): a configuration management service used to automate deployments into EC2/on-premises

f) IoT Greengrass - connects to devices on-premises



***AWS DATABASE MIGRATION SERVICE (DMS)***

AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, non-relational databases, and data stores into the cloud.

With AWS DMS, you move data between a source and a target database. The source and target databases can be of the same type or different types. During the migration, your source database remains operational.


*Enables you to test apps against production data without affecting production users

*Combine several databases into a single database

*Send ongoing copies of data to other targets instead of a one-time migration

***Additional database services

1)Amazon DocumentDB is a document database service that supports MongoDB workloads. (MongoDB is a document database program.)

2)Amazon Neptune is a graph database service. You can use Amazon Neptune to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.

3)Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. 

You can use Amazon QLDB to review a complete history of all the changes that have been made to your application data.

4)Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks. Blockchain is a distributed ledger system that lets multiple parties run transactions and share data without a central authority.

5)Amazon ElastiCache is a web service that adds caching layers on top of your databases to help improve the read times of common (frequently called) requests. It supports two open-source in-memory caching engines:

1)Redis-

2)Memcached- 


6)Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB.It helps improve response times from single-digit milliseconds to microseconds. (No SQL)


7) Redshift (OLAP): for business intelligence / data warehousing (DWH)


****RDS INSTANCE: Amazon EBS is Primary storage used by RDS instances/ block level storage.

Services-EC2 --> 

Database - RDS --> Create database --> Amazon Aurora (replicates 6 copies across 3 AZs) --> use the Production template --> DB instance identifier --> create username, enter master password --> DB instance size (micro) --> storage, enable auto scaling --> availability --> connectivity --> DB options (initial DB name) --> backup options --> Create database. It takes 10-15 min to create the RDS DB instance; status goes from "creating" to "available".

Select RDS instance details to see - Monitoring cloudWatch metrics.. 

End point/port of RDS instance.


****DNS - Domain Name System: translates domain names into IP addresses and connects you to web servers.

*ROUTE 53: Amazon's DNS service; it is a global service, similar to IAM.

*Services --> Networking --> Route 53


****AWS SECURITY MODEL**** 

Shared responsibility model: both the customer and AWS are responsible for AWS security.

Customer- security in the cloud: Patching software on Amazon EC2 instances & Setting permissions for Amazon S3 objects, data security with encryption etc

AWS- security of the cloud: Maintaining network infrastructure,Implementing physical security controls at data centers,Maintaining servers that run Amazon EC2 instances


IMP****The customer is responsible for patching the Operating System for "Infrastructure as a Service solutions", but AWS is responsible for patching the Operating System for "Platform as a Service solutions".



********AWS Identity and Access Management (IAM)

IAM enables you to manage access to AWS services and resources securely.   

IAM gives you the flexibility to configure access based on your company’s specific operational and security needs. You do this by using a combination of IAM features, which are explored in detail in this lesson:


**AWS account root user

When you first create an AWS account, you begin with an identity known as the root user. The root user is accessed by signing in with the email address and password that you used to create your AWS account. You can think of the root user as being similar to the owner of the coffee shop.

Use the root user only for limited tasks, such as changing your root user email address or changing your AWS support plan.


CREATE AWS ACC TO ESTABLISH ROOT USER-->

CREATE FIRST IAM USER WITH PERMISSION TO CREATE USERS-->

LOGIN AS IAM USER AND CREATE OTHER USERS-->

ACCESS ROOT USER FOR LIMITED TASKS ALONE


***IAM users

An IAM user(person/application) is an identity that you create in AWS, that interacts with AWS services and resources. It consists of a name and credentials.


By default, a new IAM user has no permissions associated with it. To allow the IAM user to perform specific actions in AWS (like launching an Amazon EC2 instance or creating an Amazon S3 bucket), you must grant the IAM user the necessary permissions.

Even if you have multiple employees who require the same level of access, you should create individual IAM users for each of them. This provides additional security.


*********IAM groups

An IAM group is a collection/group of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.

Assigning IAM policies at the group level also makes it easier to adjust permissions when an employee transfers to a different job


*****IAM policies

An IAM policy is a document that allows or denies permissions to AWS services and resources. A policy is a JSON document that carries Effect, Action, and Resource details for an IAM user/group.

IAM policies enable you to customize users’ levels of access to resources. For example, you can allow users to access all of the Amazon S3 buckets within your AWS account, or only a specific bucket.
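A minimal policy document with the Effect/Action/Resource elements described above, built and serialized in Python. The bucket name is made up; the "2012-10-17" policy language version is the standard one.

```python
# A minimal IAM policy document as JSON: Effect, Action, Resource.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket",  # hypothetical bucket
        }
    ],
}

document = json.dumps(policy, indent=2)  # what you would paste into IAM
print(policy["Statement"][0]["Effect"])  # Allow
```

Restricting `Resource` to a single bucket ARN is how you scope a user to one bucket instead of all of S3.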



****IAM roles

An IAM role is an identity that you can assume to gain temporary access to permissions.  


Before an IAM user, application, or service can assume an IAM role, they must be granted permissions to switch to the role. When someone assumes an IAM role, they abandon all previous permissions that they had under a previous role and assume the permissions of the new role.

IAM roles are ideal when access to services or resources needs to be granted temporarily, instead of long-term.  


If you want to delete an active role, make it inactive and then delete the role.

Roles --> Create role --> choose EC2 as the AWS service type --> attach a policy to the role (e.g. S3FullAccess) --> enter role name and description --> Create --> new role created.

Services --> EC2 (Compute) --> attach the role to a running instance --> select the EC2 instance --> Actions --> Instance settings --> Attach/Replace IAM Role.

Select the IAM role --> Apply --> now the role is attached to the EC2 instance.

cd ~            # go to the home directory

rm -rf ~/.aws   # remove stored access keys; the role now provides credentials

Roles are more secure than using access key IDs and secret keys.

A role can have multiple policies attached.

Roles are global/universal, not region-specific.

 


*****Multi-factor authentication [MFA] Eg: Root account which must have MFA]

In IAM, multi-factor authentication (MFA) provides an extra layer of security for your AWS account. You might have needed to provide your password and then a second form of authentication, such as a random code sent to your phone. This is an example of multi-factor authentication.



*******AWS Organizations*****

Suppose that your company has multiple AWS accounts, then use AWS Organizations to consolidate/manage multiple AWS accounts within a central location.

When you create an organization, AWS Organizations automatically creates a root, which is the parent container for all the accounts in your organization. 


In AWS Organizations, you can centrally control permissions for the accounts in your organization by using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services, resources, and individual API actions that users and roles in each account can access. SCPs can be applied to an individual account or an organizational unit (OU).


In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.


You can apply IAM policies to IAM users, groups, or roles, but you cannot apply an IAM policy to the AWS account root user.



****Organizational units [OU]

In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to manage accounts with similar business or security requirements. When you apply a policy to an OU, all the accounts in the OU automatically inherit the permissions specified in the policy.


****COMPLIANCE*****

**AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. 

AWS Artifact consists of two main sections:

1)AWS Artifact Agreements-you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations

2)AWS Artifact Reports-AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested and verified that AWS is compliant with a variety of global, regional, and industry-specific security standards and regulations


**In the Customer Compliance Center, you can read customer compliance stories to discover how companies in regulated industries have solved various compliance, governance, and audit challenges.


*******Denial-of-service attacks

1)A denial-of-service (DoS) attack is a deliberate attempt to make a website or application unavailable to users.

2)Distributed denial-of-service attacks

In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that aims to make a website or application unavailable. This can come from a group of attackers, or even a single attacker. The single attacker can use multiple infected computers (also known as “bots”) to send excessive traffic to a website or application.

To help minimize the effect of DoS and DDoS attacks on your applications, you can use AWS Shield.


****AWS Shield

AWS Shield is a service that protects applications against DDoS attacks (malicious attempts to disrupt traffic by flooding a website with large amounts of fake traffic).

AWS Shield provides two levels of protection: Standard and Advanced.

1)Standard-AWS Shield Standard automatically protects all AWS customers at no cost by default. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks. Free

2) Advanced-AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. 3000 USD/Year


****ADDITIONAL SECURITY SERVICES**

1) AWS Key Management Service (AWS KMS)

AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data.


2)AWS WAF(Web Application Firewall)

AWS WAF is a web application firewall that lets you monitor network requests that come into your web applications. 

AWS WAF works together with Amazon CloudFront and an Application Load Balancer. It does this by using a web access control list (web ACL) to protect your AWS resources; you create allow/deny rules.

A network ACL (NACL) is an optional layer of security for your VPC that acts at the subnet level.

Security groups act at the instance level and implicitly deny all inbound traffic; you create allow rules.


3)Amazon Inspector- helps to improve the security and compliance of applications by running automated security assessments. It checks applications for security vulnerabilities and deviations from security best practices.

After Amazon Inspector has performed an assessment, it provides you with a list of security findings. The list is prioritized by severity level and includes a detailed description of each security issue and a recommendation for how to fix it.


4)Amazon GuardDuty-is a service that provides intelligent threat detection for your AWS infrastructure and resources. It identifies threats by continuously monitoring the network activity and account behavior within your AWS environment.

GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs and DNS logs


=====================================================

****AMAZON DEVELOPMENT TOOLS****

Continuous delivery: push code from dev to prod to increase the speed of the build/test/release/deploy process.

AWS developer tools

1. AWS CodePipeline: release automation / continuous delivery service; full control; the backbone of a continuous delivery toolchain.

2. AWS CodeCommit: source control service with private Git repositories; no size limit; versioning and data encryption.

3. AWS CodeBuild: build, test, and execution service; an execution engine that runs on demand in containers, with parallel builds.

4. AWS CodeDeploy: deployment and orchestration service; no downtime; monitors deployment health.

5. AWS CodeStar: a continuous delivery toolchain with automated continuous deployment; change and push code, with security built in.


Each project has its own dashboard to see the status of tasks.

***AVOID DOWNTIME: use a Multi-AZ deployment option to handle failures/application downtime.

The AWS Global Infrastructure is centered around Regions and Availability Zones (AZs). Each AWS Region is a separate geographical area. Each AWS Region is further made up of multiple separate locations called Availability Zones. By using a multi-AZ configuration, it is possible to build a highly redundant and highly available system that can continue to operate even if one AZ is stopped, but it cannot be handled if a failure occurs over the entire region. .. In this scenario, near-zero downtime is a non-functional requirement and requires a configuration that even addresses region failures. 


=====================================================

*****AMAZON IAM(Identity & Access Mangemnet)SERVICE INTRODUCTION

IAM securely controls individual and group access to your AWS resources.

IAM User; IAM group permissions 

IAM provides fine grain control option to control access.


IAM- User/group- AWS CLI/SDK/IAM management console

Create Group -->then create user and assign to group


Full access / read only: POLICIES are JSON documents describing access to resources. A JSON policy has these 3 elements:

ACTION: which tasks the principal is authorized to perform

CONDITION: which conditions must be met for the authorization

RESOURCE: the resources on which the authorized tasks are performed
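A sketch of evaluating those three elements against a request: the action must be authorized, the resource must match, and every condition must be met. All names (including the `mfa_present` condition key) are illustrative, not the real IAM evaluation engine.

```python
# Sketch of policy evaluation over Action, Resource, and Condition.
def is_allowed(policy, request):
    return (
        request["action"] in policy["Action"]
        and request["resource"] == policy["Resource"]
        and all(request.get(k) == v for k, v in policy["Condition"].items())
    )

policy = {
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::example-bucket",  # hypothetical bucket
    "Condition": {"mfa_present": True},         # hypothetical condition key
}

print(is_allowed(policy, {"action": "s3:GetObject",
                          "resource": "arn:aws:s3:::example-bucket",
                          "mfa_present": True}))   # True
print(is_allowed(policy, {"action": "s3:GetObject",
                          "resource": "arn:aws:s3:::example-bucket"}))  # False
```

The second request fails only because the condition is not met, showing why all three elements matter.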


Below are the valid 3 IAM user access types to the AWS cloud:

1) Using AWS SDK

2) Using AWS Console

3) Using the CLI - programmatic access


***STEPS to create IAM policy and access:

STEP 1: Create an IAM group

STEP 2: Create an IAM policy and assign it to the group

STEP 3: Create an IAM user and assign the user to the group


Create an IAM role with the expected policy and assign it when you want to grant permissions for temporary service access.


=====================================================

**ABBREVIATIONS:

**AWS ACM- AWS Certificate Manager:

You can use a server certificate provided by AWS Certificate Manager (ACM) or one that you obtained from an external provider. You can use ACM or IAM to store and deploy server certificates


**AWS CAF: AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) to help organizations design and travel an accelerated path to successful cloud adoption. 



====================================================

Sample Questions:

1] TAM:- The AWS Support Enterprise Plan offers a Technical Account Manager (TAM). The TAM provides users with a variety of recommendations and guidance to help them plan and build solutions according to best practices and keep their AWS production environment healthy. TAMs are available under the Enterprise plan only.


AWS Support Concierge service - assists customers with billing and account inquiries.


AWS Support API provides access to some of the features of AWS support center via an API


AWS Infrastructure event management is a short term engagement to offer architectural and scaling guidance for an event.(ex: new product launches, infra migrations etc)



2]The "AWS Abuse team" can assist you when AWS resources are used to engage in the following types of abusive behavior: ・Spam: ・Port scanning: ・Denial-of-service (DoS) attacks: ・Hosting objectionable or copyrighted content: ・Distributing malware:


3]Amazon S3 (Simple Storage Service) allows you to access your bucket with both virtual host-style URLs and path-style URLs


4]CloudWatch allows you to set a billing alarm that triggers a notification when your usage charges exceed a set threshold. The alarm can publish to an SNS topic that notifies your email address.

It is used for monitoring the services, applications, and performance below:

Compute, EC2 instances, Auto Scaling, Route 53 health checks, CDN, EBS volumes, storage gateways, underlying physical resources [CPU/network/disk/health status]


CloudWatch collects metrics every 5 minutes by default - customizable to 1-minute intervals with detailed monitoring.

You can create CloudWatch alarms which trigger notifications. CloudWatch is all about performance.


5] By creating an AMI and then terminating the instance, it is possible to restore the EC2 instance and reproduce the application configuration when it is needed again. This is the best approach for minimal cost.


6]AWS Config continuously audits and evaluates whether your AWS resource settings meet corporate policies and compliance. Config allows you to define rules for provisioning and configuring AWS resources. Any resource settings or configuration changes that deviate from the rules will automatically trigger SNS notifications to help identify compliance gaps. You can also leverage dashboards to visually view your overall compliance status and quickly identify non-compliance resources.


**SERVICE HEALTH DASHBOARD:

status.aws.amazon.com

A way of seeing the health status of all the different AWS services across all regions, with daily/historical information, and an RSS subscription option to follow the status of a particular service/region.

The Service Health Dashboard displays the general status and availability of AWS services.


***PERSONAL HEALTH DASHBOARD

The dashboard is a personalized view of the services you actually use...

Services --> Management & Governance --> Personal Health Dashboard-->

Dashboard -->set up alerts etc...

AWS Personal Health Dashboard provides alerts and remediation/troubleshooting guidance when AWS is experiencing events that may impact your resources.

The Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.



7]By using the Direct Connect gateway, which is a related feature of Direct Connect, it is possible to connect regions over a dedicated, high-bandwidth line.


8]AWS Cost and Usage Reports provide the most comprehensive cost and usage data, including metadata such as services, pricing, and bookings. The AWS Cost and Usage Report lists the usage of each service used by your account and its IAM users as hourly or daily line items, as well as tags activated for cost allocation


9]AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization [OSRPC]


10]Snowball Edge can move 80TB of data capacity with one appliance, so you can transfer 155TB of data with two


***AWS Snowmobile:

AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 Petabytes (PB) per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration



11]By installing the CloudWatch agent on your EC2 instance, you will be able to retrieve data inside your EC2 instance that cannot be retrieved with CloudWatch metrics alone

You can set up CloudWatch Logs on CloudTrail to monitor trail logs so you'll be notified when certain activities occur.


13] SPOT-- Spot Instances are the cheapest instance type, with up to a 90% discount. A Spot Instance runs on spare AWS capacity that is temporarily rented to the user, which is why it is available at a low price. Spot Instances are priced at the bid price at the time of purchase; as long as your bid is above the current spot price, you can use the instance.


14]The following services are available at Edge Locations: CloudFront, Route 53, AWS WAF, AWS Shield, Lambda@Edge.


15]When making an SSH connection to an EC2 Linux instance, take the following actions depending on the OS of your PC: macOS connects to the EC2 instance from the terminal; Windows uses SSH client software to connect to the EC2 instance.


16]The TCO calculation tool uses items such as the amount of storage, the number of virtual servers, the number of physical servers, etc. as calculation inputs.


DynamoDB is set up in a region (a regional service); global tables can replicate data across regions.



Trusted Advisor is an online resource that helps you reduce costs, improve performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources according to AWS best practices.


Each AWS Region contains several different locations or Availability Zones. Each Availability Zone is independent of the others and is insulated from other AZ failures. Availability Zones consist of one or more data centers. AZs within a region have low-latency network links to the other Availability Zones in the same region. This allows data to be replicated synchronously between data centers, allowing failover to be automated. Therefore, a low-latency link connection allows you to keep your data as synchronized as possible.

Elastic Load Balancing & AZs (Availability Zones) should be considered to handle failures (following the Design for Failure principle)



Amazon Lightsail is a platform as a service (PaaS) example:

It's best to use "Amazon Lightsail" rather than an EC2 instance to build small websites and applications. Lightsail is ideal for simpler workloads, faster deployments, and getting started with AWS. Lightsail is a VPS designed to start with small specifications and extend. It's the perfect service to use when building simple web apps and applications.


Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior. It mainly analyzes network information such as VPC Flow Logs to determine threats.


AWS Managed Microsoft AD makes it easy to extend your existing Active Directory to the AWS cloud. This makes it possible to integrate the Active Directory used in your on-premises environment with IAM management and use it for user management.


The AWS Service Catalog allows you to create and manage a catalog of IT services that are approved for use on AWS. You can apply IAM permissions in the AWS Service Catalog to control who can view and modify your products and portfolios.


***AWS PRICING

Pricing Models

a) CapEx - stands for Capital Expenditure, where you pay up front. A fixed, sunk cost; e.g., buying storage, load balancers, and networking as a bundle.

b) OpEx - stands for Operational Expenditure, where you pay for what you use, similar to utility bills.


***5 PRICING POLICIES (moving from CapEx to OpEx):

Pay as you go

Pay less when you reserve

Pay even less per unit by using more

Pay even less as AWS grows

Custom pricing.


***Understand fundamentals of Pricing

Start early with cost optimization

Maximise the power of flexibility

Use the right pricing model for the job


3 Drivers of cost with AWS

1)Compute

2)Storage 

3)Data Outbound- [not data in]


AWS services are priced independently and transparently; you don't pay when they are not running.

Pricing Models based on products..


***AWS FREE SERVICES:

AWS VPC - virtual data center in the cloud

Elastic Beanstalk - the provisioned resources are not free

CloudFormation - the provisioned resources are not free

IAM

Auto Scaling

OpsWorks - DevOps product similar to Elastic Beanstalk; the provisioned resources are not free

Organizations & Consolidated Billing

AWS Cost Explorer

App Sync

Amplify


***VPC, Elastic Beanstalk, CloudFormation, IAM, Auto Scaling, OpsWorks, and Consolidated Billing are free AWS services. Keep in mind that with VPC, Elastic Beanstalk, CloudFormation, and Auto Scaling, the underlying provisioned resources will incur charges.


***EC2 Pricing - What determines price?

-Clock hours of server time [per second/hour]

-Instance type

-Pricing model [Reserved (1-3 yr contract), Spot (runs apps based on your bid price), On-Demand (fixed rate), Dedicated Host (physical servers)]

-Number of instances

-Load balancing [Network vs. Classic]

-Detailed monitoring [turn off when not needed]

-Auto Scaling - more EC2 instances, more pay

-Elastic IP addresses

-OS / SW packages..


**Pricing for Lambda:

e.g., Alexa skills-related functions..

1)Request Pricing

Free tier - 1 million requests per month; then $0.20 per additional 1 million requests

2)Duration Pricing

400,000 GB-seconds per month free

3)Additional Charges

Lambda read & write requests to other services (and data transfer) also affect price
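Using the request-pricing figures above, a rough monthly estimate can be sketched in a few lines (a simplification that ignores duration pricing and other charges):

```python
def lambda_request_cost(requests_per_month: int) -> float:
    """Estimate monthly Lambda request charges (USD) using the figures
    above: first 1 million requests free, then $0.20 per million."""
    free_tier = 1_000_000
    price_per_million = 0.20
    billable = max(0, requests_per_month - free_tier)
    return billable / 1_000_000 * price_per_million

# 500K requests fall entirely in the free tier.
print(lambda_request_cost(500_000))    # 0.0
# 6 million requests: 5 million billable -> $1.00
print(lambda_request_cost(6_000_000))  # 1.0
```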


***EBS Pricing 

Volumes /GB

Snapshots /GB

Data transfers


***S3 pricing

Storage class (e.g., IA)

Storage

Requests(GET/PUT)

Data transfer 


**Glacier Pricing

Storage 

Data access/retrieval time [longer retrieval time is cheaper]


**Snowball pricing

PB-scale data transport solution to transfer large amounts of data into the cloud.

-Service fee per job [50TB - 200 USD; 80TB - 250 USD]

-Daily charge - first 10 days free, then $15 per day

-Data transfer into S3 is free; data transfer out is not free
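The fee structure above can be sketched as a small calculator (using only the figures quoted in these notes; real pricing varies by region):

```python
def snowball_job_cost(capacity_tb: int, days_on_site: int) -> float:
    """Estimate one Snowball job's cost (USD) from the figures above:
    a flat service fee per job plus $15/day after the first 10 free days.
    Data transfer INTO S3 is free, so it adds nothing here."""
    service_fee = {50: 200.0, 80: 250.0}[capacity_tb]
    extra_days = max(0, days_on_site - 10)
    return service_fee + 15.0 * extra_days

# An 80TB appliance kept on-site for 14 days: $250 + 4 * $15 = $310
print(snowball_job_cost(80, 14))  # 310.0
```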


**RDS Pricing

-clock hours of server time

-DB characteristics

-Database purchase type

-Number of DB instances

-Provisioned storage

-Additional Storage

-Requests

-Deployment Type

-Data Transfer 


**CloudFront

-Traffic Distribution

-Number of request

-Data transfer out

=======================================================

**Elastic Transcoder: Old way to transcode videos to streaming formats


**AWS Elemental MediaConvert (successor to Elastic Transcoder)

Transcodes videos to streaming formats

Overlays images

Inserts video clips

Extracts caption data

Robust UI

==================================================================


***BUDGETS vs COST EXPLORER

Budgets: set custom budget thresholds to trigger alerts - predict costs BEFORE they are incurred.

Cost Explorer: a user interface to explore costs AFTER they have been incurred.

AWS-->Profile --> My Billing Dashboard --> Budgets

AWS-->Profile --> My Billing Dashboard --> Cost Explorer 

In both AWS Budgets & CloudWatch, alarms can be set to monitor spending on your AWS account.

Plan your service usage, costs, and instance reservations using AWS Budgets.

The first 2 budgets are free.

Create monthly/yearly budgets-->Refine and manage alerts on Budgets


**AWS COST EXPLORER: It is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase..

Default and custom reports, Forecasting options..

Visualize data on a monthly or daily basis

Use filters and grouping to dig deeper into the data..


***AWS DIFFERENT SUPPORT PLANS

Basic

Developer

Business - one-hour response time for production system failures.

Enterprise - $15,000 per month (minimum); includes TAM support (Technical Account Manager)


***TAGGING & RESOURCE GROUPS:

TAGS--> words or phrases that act as metadata for organizing your AWS resources; attached to AWS resources 

-Metadata 

-Inherited 

-Service-specific information [EC2 - public/private IPs; ELB - port config; RDS - DB engine, etc.]

TAG EDITOR is used to find resources and to add tags. It is a global AWS service. 


*RESOURCE GROUPS - make it easy to group your resources using tags; you can group resources that share one or more tags.

A resource group is a collection of resources that share one or more tags (or portions of tags)

You can use resource groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time.

Resource groups contain info as below:

Region, Name, Emp id, Dept 


Resource Groups menu --> Saved & Create Group / Tag Editor 

Create a Group -- tag-based --> a Classic Group is region-based 


AWS Systems Manager - allows you to manage AWS resources at scale.

It can be used to stop multiple instances with a single click, instead of turning off EC2 instances one by one.


***AWS Organizations & Consolidated Billing:

AWS Organizations--> Always enable MFA on the paying/master account

The paying/master account should be used for billing purposes only [do not deploy resources in it]

Organizational Units (OUs) are linked with multiple AWS accounts

Linked accounts default to a maximum of 20 per paying account [the limit can be raised on request]


**Consolidate your billing & payment methods across multiple accounts into ONE bill

Use Cost Explorer to visualize usage for the consolidated bill

Lets you take advantage of Volume discounts(more use and more save)
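A toy example of why consolidation saves money under tiered pricing (the tier boundary and rates below are illustrative assumptions, not quoted AWS rates):

```python
def tiered_cost(gb: float) -> float:
    """Blended cost under a HYPOTHETICAL two-tier price: $0.023/GB for
    the first 50,000 GB each month, $0.022/GB beyond that (illustrative
    only - see the AWS pricing pages for real storage tiers)."""
    tier1 = min(gb, 50_000)
    tier2 = max(0.0, gb - 50_000)
    return tier1 * 0.023 + tier2 * 0.022

# Billed separately, two accounts using 30,000 GB each never reach tier 2:
separate = tiered_cost(30_000) + tiered_cost(30_000)
# Consolidated, their combined 60,000 GB gets 10,000 GB at the cheaper rate:
combined = tiered_cost(60_000)
print(separate, combined)  # 1380.0 1370.0
```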


**Billing alerts :

When monitoring is enabled on the paying account, billing data for all linked accounts is included

You can still create billing alerts per individual account.

In both AWS-Budget & CloudWatch, alarms can be set to monitor spending on your AWS Account



****CLOUDTRAIL

CloudWatch - monitors performance (e.g., EC2 instance CPU utilization) and triggers alarms when required

CloudTrail - auditing tool that monitors API calls in the AWS platform

New users/roles and their activity will be logged

-CloudTrail is enabled per AWS account, per region

-Can consolidate logs into an S3 bucket

-Turn CloudTrail on in the paying account, create a bucket policy, and use a bucket in the paying account - to consolidate all logs


***Consolidated billing allows you to get discounts across all your accounts. Unused Reserved Instances for EC2 are applied across the group.


**AWS ORG LAB:

Console-->Profile --> My Organizations--> it is a global service

--Create organization (full access) -->Invite account/Create account-->

-->New Org Unit --> Apply policies to the OU -->Create a policy, e.g., "Serverless" (blocks EC2) -->Deny/Allow effect -->Add statement -->Create Policy --> 

Go to the Org Unit --> select the policy for the OU..-->Click Attach policy -->Done


***AWS QUICK STARTS & LANDING ZONES - LAB

1)The Quick Starts service page lists all the AWS services..

It is a way of deploying environments quickly using CloudFormation templates, built by technology experts, reducing hundreds of manual procedures to a few steps

A Quick Start is comprised of 3 parts:

a) A reference architecture for the deployment

b) AWS CloudFormation templates that automate and configure the deployment

c) A deployment guide explaining the architecture..



2)AWS Landing Zone-->2018 https://aws.amazon.com/answers/aws-landing-zone

Helps enterprise customers more quickly set up a secure, multi-account AWS environment based on AWS best practices (using the AWS Account Vending Machine, AVM).

This includes 4 accounts, and add-on products can be deployed using AWS Service Catalog:

a)AWS organizations account

b)Shared service account

c)Log archive account

d)Security account


**AWS Account Vending Machine (AVM) - automatically provisions and configures new accounts via a Service Catalog template & uses SSO for managing and accessing accounts.


***AWS Partner Network Program:

1)Consulting Partner - services: design, architect, build, migrate, and manage customer workloads and applications.

2)Technology Partner - hardware, connectivity, and software solutions hosted on/integrated with the AWS cloud.

Based on the number of certified professionals, the consulting tier varies: Basic, Advanced, Premier consulting..


**Different AWS Cost Calculators: to calculate estimated/actual costs, etc.

1) AWS Simple Monthly Calculator [hosted on an S3 static website]

Used to calculate your running costs on AWS on a per-month basis; it is not a comparison tool.

https://calculator.s3.amazonaws.com/index.html

Add resources and check calculations... 


2)Total Cost of Ownership Calculator(TCO Calculator) 

Allows you to estimate how much you save when moving to AWS from an on-premises model:

(based on the required setup inputs for on-prem services vs. the pay-as-you-go model) Server, Storage, Network, IT Labor costs 

https://aws.amazon.com/tco-calculator

The TCO calculator is used to compare the costs of running infrastructure on-premises vs. in the AWS cloud. 

It will generate reports that you can give to your C-level executives to make a business case to move to the cloud.



====================================================

AWS SECURITY & COMPLIANCE :

AWS Services--> Security & Mgmt --> Artifact 


Global standards, USA level: SOC 1, SOC 2, SOC 3, etc..

PCI DSS L1 - compliance certification for the online payment standard


HIPAA certification attests to the fact that the AWS Platform has met the standard required for the secure storage of medical records in the US


**Shared Responsibility Model: (shared Responsibility varies based on the services used)

Security and Compliance is shared responsibility b/w AWS and customer.

https://aws.amazon.com/compliance/shared-responsibility-model

*AWS (security OF the cloud, at the host level) - configuration of infrastructure devices; the software/OS that runs S3, COMPUTE, STORAGE, DATABASE/RDS, NETWORKING; HARDWARE/AWS Global Infrastructure (Regions, Availability Zones, Edge Locations, data centers); Elastic MapReduce (allocating EC2 instances to process complex data); DynamoDB; data center operations; disk disposal (when storage reaches end of life); controlling physical access to compute resources; patching network infrastructure; running the hypervisors; performing backups, patching DB software, and installing DB software (for managed DBs)


*CUSTOMER (security IN the cloud, at the guest level) - client-side data encryption, server-side encryption, network traffic protection, the OS of EC2 (Elastic Compute Cloud) instances, network firewall configuration rules, platform, apps, IAM, secret access keys, customer DATA, patch updates, VPC management, filtering traffic with security groups/ACL config, password complexity rules, managing DB settings, building DB schemas


BOTH: configuration management, data security, patch management


****AWS WAF & SHIELD**

1)WAF - Web Application Firewall: 

It is a web application firewall designed to stop hackers; 

helps protect your web apps from common exploits which may affect security or consume excessive resources.


2)AWS SHIELD - (DDoS)

It is a managed Distributed Denial of Service protection designed to stop DDoS attacks.

This protection service safeguards web applications running on AWS & provides automatic inline mitigations.

There are 2 tiers of AWS Shield:

a) Shield Standard 

b) Shield Advanced (cost protection - yes; charged at $3,000 per month)

AWS Console--Services --> Security & Compliance --> WAF & Shield -->


AWS SHIELD Advanced - get 24/7 support from the DDoS response team and detailed visibility into DDoS events.

AWS FIREWALL MANAGER - simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. 


*****AWS INSPECTOR vs AWS TRUSTED ADVISOR vs CLOUDTRAIL

1)AWS INSPECTOR-

Used for inspecting EC2 instances at the OS level for vulnerabilities.

Services--> Security, Identity & Compliance --> Inspector


2)AWS TRUSTED ADVISOR - Online Tool

Inspects your AWS account as a whole (not just EC2). It does more than security checks: saving money, service limits, performance, fault tolerance, etc. 

It is a global service, like a CCTV camera: it checks all services and provides recommended actions.


It reports on 5 categories on the Trusted Advisor dashboard (you can set email preferences for reports):

[[Cost optimization, performance, security, fault tolerance, service limits]]

Core checks and recommendations [CPSFS]

Full Trusted Advisor - Business & Enterprise support plans only - a highly priced/charged service


Services--> Management & Governance--> Trusted Advisor- 

Upgrade to Business to get more services/benefits..


3)AWS CLOUDTRAIL - monitors API logs from all AWS services.


***CLOUDWATCH vs. AWS CONFIG:

CLOUDWATCH - is about monitoring performance: CPU, network, disk, status checks, RAM utilization. 

AWS CONFIG - monitors configuration-level changes of an AWS account; a detailed view of the configuration of the AWS resources in your AWS account.

CLOUDTRAIL - monitors API calls for the AWS services. 


**AWS PENETRATION TESTING:

An authorized simulated cyberattack on a computer system, performed to evaluate the security of the system.


CAN - pen testing is a simulated cyberattack against your computer system to check for exploitable vulnerabilities.

You can carry out pen tests against AWS infrastructure without prior approval for 8 services:

1. EC2 instances, 2. RDS, 3. CloudFront, 4. Aurora, 5. API Gateway, 6. Lambda & Lambda@Edge functions, 7. Lightsail, 8. Elastic Beanstalk environments 


Prohibited activities for pen testing:

DNS zone walking via Route 53, DDoS, port flooding, protocol flooding, request flooding 

CAN'T - pen testing beyond the above services needs approval; contact aws-security-simulated-event@amazon.com - approval could take 7 days.



***AWS KMS (KEY MANAGEMENT SERVICE) - 

AWS KMS--> the original service for encryption and decryption of your data; a regional service on shared (multi-tenant) physical hardware. 

It manages keys for the below and uses the envelope encryption method:

S3 objects, API keys, DB passwords, Systems Manager parameter storage.

Encrypts and decrypts up to 4 KB of data per call.


***CLOUDHSM - (HARDWARE SECURITY MODULE)

Used for encryption and decryption.

More expensive than KMS: a dedicated hardware security module (KMS is multi-tenant; CloudHSM is single-tenant).

Can be deployed across multiple AZs to handle failover..

CloudHSM is validated to FIPS 140-2 Level 3 (Federal Information Processing Standards).


***SECRETS MANAGER vs PARAMETER STORE:

**Both are used to store passwords, DB connection strings, etc.; values are stored encrypted with KMS.

Set a TTL (Time to Live) to expire values such as passwords.


**SECRETS MANAGER - similar to the Systems Manager (SSM) Parameter Store, but a charged service that adds the features below:

a)Automatically rotates passwords

b)Applies the new key/password in RDS for you

c)Generates random secrets via the SDK/CLI


**Parameter Store is free up to 10,000 parameters; if you need more, go for Secrets Manager.


***AWS COST AND USAGE REPORT: [Profile--> My Billing Dashboard]

Generates a detailed CSV report that enables you to better analyze, understand, and track your AWS costs/EC2 Reserved Instance costs at a granular data level. 

Publishes these reports to an S3 bucket that you own.

Use Athena to turn the report into a queryable database.

Use QuickSight to visualize your billing data as graphs.


****GUARDDUTY:

Threat detection service that continuously monitors for malicious, suspicious activity and unauthorized behavior, using machine learning to analyze logs (CloudTrail, VPC Flow, and DNS logs).

It will alert you with findings, which you can automate against using CloudWatch Events or a 3rd-party service.

Scenario-based question --> 

a)It uses machine learning algorithms to monitor and protect AWS accounts.

b)One click to enable (30-day free trial) - no software install required

c)Input data includes - CloudTrail event logs, VPC Flow logs, DNS logs


**AWS CONTROL TOWER:

Provides an easy way to set up multiple AWS accounts at a time, in a few minutes, with the required policies.

Large enterprises with multiple AWS accounts use this service.


***AWS SECURITY HUB:

A comprehensive view of security alerts across multiple AWS accounts.

It aggregates, organizes, and prioritizes security alerts (findings); to manage findings from thousands of accounts we use the Security Hub service.


***COMPROMISED IAM CREDENTIALS:

Sometimes credentials are exposed (e.g., committed to a public Git repository). To handle this, take the actions below:

*Determine - what resources those credentials have access to 

*Invalidate - the credentials so they are no longer valid, using IAM

*Consider - invalidating any temporary security credentials that might have been issued using those credentials

*Restore - appropriate access

*Review - access to your AWS account to confirm all is good.



**ATHENA -- (Query Tool)

-Interactive query service to get data from S3 using SQL

-Serverless, nothing to provision; pay per query / per TB scanned

-No need to set up complex ETL processes

-Works directly with data stored in S3

-Used to query log files in S3 

-Generate business reports from S3 data

-Analyse AWS cost and usage reports

-Run queries on click-stream data
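The pay-per-TB-scanned model above can be sketched as follows (the $5/TB rate is an assumed, commonly quoted Athena price; verify against the pricing page for your region):

```python
def athena_query_cost(tb_scanned: float, price_per_tb: float = 5.0) -> float:
    """Cost of one Athena query under pay-per-TB-scanned pricing.
    price_per_tb is an assumed rate - check the AWS pricing page."""
    return tb_scanned * price_per_tb

# Scanning 0.5 TB at $5/TB costs $2.50. Compressing or partitioning your
# S3 data reduces the bytes scanned, and therefore the cost per query.
print(athena_query_cost(0.5))  # 2.5
```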


**PII - Personally Identifiable Information [SSN, passport, DOB, etc.]

**MACIE -- (Security Service - PII)

Security service which uses machine learning/AI and NLP (Natural Language Processing) to discover, classify, and protect sensitive data in S3

-Uses AI to recognize if your S3 objects contain sensitive data such as PII

-Dashboards, reporting and alerts

-Works directly with data stored in S3

-Can also analyse CloudTrail logs for suspicious activity

-Great for PCI-DSS compliance and preventing ID theft..


*****************

AI SERVICES 

LEX, POLLY, TRANSCRIBE AND REKOGNITION:

Lex - conversational chatbot service using text or voice 

Polly - converts text to lifelike speech

Transcribe - converts speech into text

Rekognition - analyzes images into tags/text 


***EC2 LICENSING:

Special licensing - Dedicated Host:

If you have per-socket, per-core, or per-VM software licenses, go for the Dedicated Host licensing model..


**AWS DIFFERENT COMPUTE SERVICES:

EC2 - virtual servers in the cloud

Lightsail - simple cloud service / less customization than EC2

Lambda - serverless compute in the cloud

AWS Batch - compute service for batch computing; plan, schedule, and execute batch computing workloads

Elastic Beanstalk - PaaS compute service

Serverless Application Repository - deploy serverless apps [e.g., Alexa]

AWS Outposts - extends AWS infrastructure to on-premises data centers

EC2 Image Builder - build your own custom EC2 images for Linux/Windows


**VPC Overview : Virtual Private Cloud

Logically isolated section of AWS Cloud where you can launch AWS resources in a virtual network you define. 


**VPN- Virtual Private network -connection b/w DC and VPC.


**ON-PREMISES CONNECTION TO AWS:

VPN - hardware virtual private network connection to your data center [like connecting to the office from home],

extending your network/data center.


AWS Direct Connect - a dedicated private network 

connection which can reduce cost, increase bandwidth throughput, and give a more consistent network experience than internet-based connections.


VPN over Direct Connect: for ultimate security - traffic encrypted in AWS over a Direct Connect link using a VPN.


**LAMBDA: (Serverless) - the ultimate abstraction layer:

Event-driven compute service

E.g.: Alexa - directly talking to Lambda.. 


**Cloud history: 

Data centers -- IaaS -- PaaS -- Containers -- Serverless


*****Architecture of LAMBDA:

Lambda supported languages - 6: Node.js, C#, Java, Python, PowerShell, Go 

Pricing--> priced by number of requests (invocations) & duration (execution time) - 

first 1 million requests free, then $0.20 per 1 million thereafter

Duration is measured from when your code begins executing until it returns, billed in GB-seconds


Version control - multiple versions of code can be used

Shared responsibility model - you are responsible for the code; AWS takes care of the rest


LAMBDA :---no servers, auto scaling, super cheap; -

Scales out - not up - automatically.

Functions are independent, serverless


***QUESTIONS--CORRECTIONS:==============================

Amazon WorkSpaces provides a Desktop as a Service (DaaS) solution

They can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes.


**PER SECOND BILLING:

***Linux/Ubuntu-based instances are charged in one-second increments, with a minimum of one minute. [4 hr 2 min 3 sec is billed exactly] [if only 40 seconds are used = 1 min is billed]


Other instances are billed to the next full hour [4 hr 2 min 3 sec = 5 hrs billing]


-Per-second billing is available for On-Demand, Spot, and Reserved instances; all regions and AZs; Amazon Linux and Ubuntu ..
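The two billing rules above can be sketched as:

```python
import math

def billable_seconds_per_second(run_seconds: int) -> int:
    """Per-second billing with a one-minute minimum (Linux/Ubuntu rule)."""
    return max(60, run_seconds)

def billable_hours_rounded(run_seconds: int) -> int:
    """Hourly billing: partial hours round up to the next full hour."""
    return math.ceil(run_seconds / 3600)

run = 4 * 3600 + 2 * 60 + 3              # 4 hr 2 min 3 sec = 14523 seconds
print(billable_seconds_per_second(run))  # 14523 (billed exactly)
print(billable_seconds_per_second(40))   # 60 (one-minute minimum)
print(billable_hours_rounded(run))       # 5 (rounded up to 5 hours)
```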



Vertical Scaling is increasing the size and computing power of a single instance or node without increasing the number of nodes or instances


The Auto Scaling Group can be used to scale out and scale in the instances as the demand dictates. This will save money and avoid having instances sitting idle for long periods of time.

AWS Auto Scaling monitors your applications and automatically adjusts your capacity to maintain steady, predictable performance at the lowest possible cost.


**5 pillars of a well architected framework: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization. [OSRPC]


Use the Limits page in the Amazon EC2 console to request an increase in the limits for resources provided by Amazon EC2 or Amazon VPC on a per-Region basis.


A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets


The AWS Acceptable Use Policy provides information on performing penetration testing on your EC2 instances.

Terms & Conditions Policy- provides generic terms to be followed in using AWS services.


IAM: Identity and access management:

Create policies for each department that define the permissions needed. 

Create an IAM group for each department and attach the policy to each group.

Add each department's members to their respective IAM group.


With the "IAM policy simulator", you can test and troubleshoot identity-based policies, IAM permissions boundaries, Organizations service control policies, and resource-based policies.


**IAM entities are the users (IAM users and federated users) and roles that are created and used for authentication.

Identities are the IAM resource objects that are used to identify and group. You can attach a policy to an IAM identity. These include users, groups, and roles.

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic


**A Principal is a person or application that uses the AWS account root user, an IAM user, or an IAM role to sign in and make requests to AWS.


**AWS KINESIS- Service to stream data in real-time for a dashboard application

"Amazon Kinesis" makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.


****AWS ADS:

AWS Application Discovery Service helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers


**Migration Hub:

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions


***AWS SWF:

Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components


***AWS PINPOINT:

Amazon Pinpoint is used to engage your customers by sending them targeted and transactional email, SMS, push notifications, and voice messages.


An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.



AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment


Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.


Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.


AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.


AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem


The Business Plan is the cheapest plan that will still provide the full set of Trusted Advisor checks.

The Enterprise Plan will provide the full set of Trusted Advisor checks, but it is the most expensive plan.


AWS provides two types of cost allocation tags, an AWS generated tags and user-defined tags. AWS defines, creates, and applies the AWS generated tags for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.


***Cost Allocation Tags: You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed/granular level.
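A toy illustration of why cost allocation tags matter: once resources carry a user-defined tag such as `CostCenter`, spend can be rolled up per tag value, which is what Cost Explorer and the cost allocation report do at scale. The resource IDs, tag values, and costs below are all made up.

```python
from collections import defaultdict

# Hypothetical resources, each with a user-defined "CostCenter" tag.
resources = [
    {"id": "i-aaa",  "tags": {"CostCenter": "marketing"},   "cost": 12.50},
    {"id": "i-bbb",  "tags": {"CostCenter": "engineering"}, "cost": 40.00},
    {"id": "vol-c",  "tags": {"CostCenter": "marketing"},   "cost": 3.25},
]

# Roll up cost per tag value -- a granular, per-department view.
costs = defaultdict(float)
for r in resources:
    costs[r["tags"]["CostCenter"]] += r["cost"]

print(dict(costs))
```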



"Amazon EMR" is a web service that makes it easy to process large data sets efficiently


Q: Which AWS service provides a file system interface (mount) backed by S3?

"A file gateway" supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). 



Q: Which AWS service can be used to trace user requests end-to-end through an application built with microservices?

AWS X-Ray service helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors


Q: Which controls do customers fully inherit from AWS?

AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.


**Shared controls for AWS/Customer are:

Patch management controls

Database controls

Awareness & training controls



***"AWS Transit Gateway" is a network transit hub that simplifies how customers interconnect all of their VPCs, across thousands of AWS accounts and into their on-premises networks


***"A VPC peering" connection is a networking connection between two VPCs that enables customers to route traffic between them using private IPv4 addresses or IPv6 addresses
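One practical constraint worth remembering: a VPC peering connection cannot be created between VPCs whose CIDR blocks overlap. That check can be sketched with the Python standard library's `ipaddress` module (the CIDR ranges below are hypothetical examples):

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """Peering requires the two VPCs' address ranges to be disjoint."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.5.0/24"))  # False: nested ranges
```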


=================================================

KEY URLs:

https://aws.amazon.com/tco-calculator

https://calculator.s3.amazonaws.com/index.html

https://docs.aws.amazon.com/general/latest/gr/glos-chap.html

https://aws.amazon.com/compliance/shared-responsibility-model
