SECP1513 TIS Reflections /
AWS Academy Cloud Foundations Badges

Introduction 

 

AWS Academy Cloud Foundations is intended for students who seek an overall understanding of cloud computing concepts, independent of specific technical roles. It provides a detailed overview of cloud concepts, AWS core services, security, architecture, pricing, and support, and it also helps me prepare for the AWS Certified Cloud Practitioner exam.

 

After earning the AWS Academy Cloud Foundations badge, I should be able to fulfil the following objectives of the course:

 

 

  • Define the AWS Cloud.

 

I have achieved this objective in module 1. Cloud computing is the on-demand delivery of computing power, databases, storage, applications, and other IT resources over the internet with pay-as-you-go pricing. These resources are hosted on servers in large data centres located all around the world. When you use a cloud service such as AWS, the service provider owns and maintains the underlying machines for you. The resources can be used as building blocks to create solutions that fulfil both business and technical needs. As cloud computing has grown in popularity, multiple service models and deployment strategies have emerged to serve different customers, and understanding them helps you choose the right cloud service model and deployment strategy for your needs. Cloud computing lets you think of (and use) your infrastructure as software, and there are AWS service analogues for most of the traditional, on-premises IT space. Amazon Web Services (AWS) is a secure cloud platform with a broad, worldwide portfolio of cloud products. This means you have on-demand access to compute, storage, network, database, and other IT resources, along with the tools to manage them. You can provision AWS resources instantly and start using them within minutes. AWS is also adaptable: your AWS environment can be scaled up or down dynamically to match usage patterns and save money, or shut down temporarily or permanently, so AWS spending becomes an operating expenditure rather than a capital one. AWS services can accommodate nearly any application or workload, and you can use them as building blocks to create complex, scalable solutions that adapt as your needs evolve.
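
To make the idea of infrastructure as software concrete, the sketch below uses the AWS SDK for Python (boto3) to start and then release a virtual server on demand; the Region and AMI ID are placeholders chosen for illustration, not values from the course.

    import boto3

    # Create an EC2 client in one AWS Region (Region chosen only for the example).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single t2.micro instance; the ImageId below is a hypothetical AMI.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched:", instance_id)

    # Because the infrastructure is software, it can be released just as easily.
    ec2.terminate_instances(InstanceIds=[instance_id])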

For module 1, I managed to get a grade of 80/100.

 

 

  •  Explain the AWS pricing philosophy.

 

I have achieved this objective in module 2. AWS costs are driven mainly by compute, storage, and outbound data transfer, and these characteristics vary depending on the AWS product and pricing model. In most cases, data transfer between AWS services within the same AWS Region is free, but there are exceptions, so verify data transfer rates before using a service. Outbound data transfer is aggregated across services and charged at the outbound data transfer rate, which appears on your monthly bill as AWS Data Transfer Out. The AWS pricing philosophy is simple: although the range of AWS services has expanded tremendously, the pricing approach has not changed. Every month you pay only for what you use, you can change your usage at any time, and there are no long-term contracts. The philosophy includes: pay for what you use, pay less when you reserve, pay less when you use more, and pay even less as AWS grows. AWS also helps new users get started with free usage for up to a year through the AWS Free Tier, and certain AWS services and features are always free. New AWS customers get a year of free usage of an Amazon Elastic Compute Cloud (Amazon EC2) t2.micro instance, together with free usage tiers for Amazon S3, Amazon EBS, Elastic Load Balancing, AWS data transfer, and other AWS services. Other services used alongside the free ones may still incur charges; for example, if EC2 instances scale out automatically beyond the free allowance, the additional usage is billed. The best way to estimate costs is to examine the fundamental characteristics of each AWS service, estimate your usage for each characteristic, and then map that usage to the prices posted on the AWS website. This pricing strategy gives you the flexibility to choose the services you need for each project and to pay only for what you use.
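
As a small worked example of that estimation approach, the sketch below multiplies assumed usage figures by illustrative unit prices; the rates and usage numbers are made up for the exercise, and real prices must be taken from the AWS pricing pages.

    # Hypothetical monthly estimate: the unit prices below are illustrative only.
    EC2_HOURLY_RATE = 0.0116       # assumed on-demand price per t2.micro hour (USD)
    S3_RATE_PER_GB = 0.023         # assumed S3 Standard storage price per GB-month (USD)
    DATA_OUT_RATE_PER_GB = 0.09    # assumed outbound data transfer price per GB (USD)

    ec2_hours = 2 * 730            # two instances running for a 730-hour month
    s3_storage_gb = 50             # 50 GB stored in Amazon S3
    data_out_gb = 20               # 20 GB transferred out to the internet

    estimate = (ec2_hours * EC2_HOURLY_RATE
                + s3_storage_gb * S3_RATE_PER_GB
                + data_out_gb * DATA_OUT_RATE_PER_GB)
    print(f"Estimated monthly cost: ${estimate:.2f}")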

For module 2, I managed to get a grade of 80/100.

 

 

  • Identify the global infrastructure components of AWS.

 

I have achieved this objective in module 3. Regions are the backbone of the AWS Cloud; at the time of this course, AWS had 22 Regions. An AWS Region is a physical geographical location that contains several Availability Zones, and each Availability Zone contains one or more data centres. Regions are isolated from one another to achieve fault tolerance and stability: resources in one Region are not automatically replicated to other Regions, and data stored in a Region is not copied outside that Region. If your business requires cross-Region redundancy, you must replicate the data across Regions yourself. Availability Zones are isolated locations within each AWS Region. Each Availability Zone makes it possible to run applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data centre. An Availability Zone typically contains multiple data centres (usually three) and hundreds of thousands of servers, and Availability Zones are the isolation partitions of the AWS Global Infrastructure. Each Availability Zone has its own power infrastructure and is physically separated from the others by several kilometres, although all the zones in a Region are within about 100 km of each other. Data centres are the foundation of the AWS infrastructure, but customers do not choose a specific data centre when deploying resources; the Availability Zone is the most granular level of specification a customer can make, and the data itself resides in a data centre. Amazon operates modern, highly available data centres, yet failures affecting the availability of instances in the same location can still occur: if you host all of your instances in a single location that fails, none of your instances will be available. The AWS Global Infrastructure has several valuable features. First, it is elastic and scalable, meaning resources can dynamically adjust to increases or decreases in capacity requirements and can rapidly accommodate growth. Second, it is fault tolerant, with built-in component redundancy that lets it continue operating despite a failed component. Finally, it requires minimal to no human intervention while providing high availability with minimal downtime.
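
A quick way to see these components for yourself is to list the Regions and Availability Zones visible to an account with boto3; this is a minimal sketch assuming default credentials are already configured.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List every Region enabled for the account.
    for region in ec2.describe_regions()["Regions"]:
        print("Region:", region["RegionName"])

    # List the Availability Zones inside the client's own Region.
    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print("Availability Zone:", zone["ZoneName"], "-", zone["State"])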

For module 3, I managed to get a grade of 80/100.

 

 

  • Describe security and compliance measures of the AWS Cloud including AWS Identity and Access Management (IAM).

 

I have achieved this objective in module 4. AWS and the customer share responsibility for security and compliance. This shared responsibility model is intended to reduce the customer's operational burden, but to provide the flexibility and control needed to deploy solutions on AWS, the customer remains responsible for some elements of security. The distinction is between who is accountable for security "of" the cloud and security "in" the cloud. AWS is responsible for security of the cloud: it operates, manages, and controls every component, from the host operating system and virtualization layer down to the physical security of the facilities in which the services run. In other words, AWS secures the infrastructure that powers all AWS Cloud services, which consists of the hardware, software, networking, and facilities of the global infrastructure of Regions, Availability Zones, and edge locations. The customer is responsible for security in the cloud: encrypting data in transit and at rest, securing the network, keeping security credentials and logins safe, and configuring security groups and the operating system of any compute instances they launch, including updates and security patches. AWS Identity and Access Management (IAM) controls access to compute, storage, database, and application services, which allows you to control which users may access which services. Authentication is a fundamental computer security concept: consider how you authenticate yourself at the airport before catching a flight, showing a security officer proof of your identity before you can enter a restricted area. Getting access to AWS resources in the cloud works in a similar way. When you create an IAM user, you choose the type of access the user has to AWS resources: programmatic access, AWS Management Console access, or both. If you allow programmatic access, the IAM user must provide an access key ID and a secret access key when calling the AWS API through the AWS CLI, an SDK, or another development tool. If you allow AWS Management Console access, the IAM user signs in through the browser, providing the 12-digit account ID or its alias together with a user name and password; if the user has enabled multi-factor authentication (MFA), they are also asked for an authentication code.
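
The sketch below illustrates with boto3 how an IAM user could be given both programmatic and console access plus a managed policy; the user name, password, and chosen policy are hypothetical values for the example only.

    import boto3

    iam = boto3.client("iam")

    # Create a hypothetical IAM user.
    iam.create_user(UserName="report-reader")

    # Programmatic access: an access key ID and secret access key for the CLI/SDKs.
    keys = iam.create_access_key(UserName="report-reader")
    print("Access key ID:", keys["AccessKey"]["AccessKeyId"])

    # Console access: a password that must be changed at first sign-in.
    iam.create_login_profile(
        UserName="report-reader",
        Password="TempPassw0rd!example",   # placeholder only
        PasswordResetRequired=True,
    )

    # Grant permissions through an AWS managed policy (read-only S3, as an example).
    iam.attach_user_policy(
        UserName="report-reader",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )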

For module 4, I managed to get a grade of 90/100.



 

  • Create an AWS Virtual Private Cloud (Amazon VPC).

 

I have achieved this objective in module 5. Amazon Virtual Private Cloud (Amazon VPC) is an AWS service that enables me to create logically isolated sections of the AWS Cloud in which to deploy AWS resources. Amazon VPC allows me to choose my own IP address range, create subnets, and configure route tables and network gateways, and a VPC can use both IPv4 and IPv6 for secure access to resources and applications. I can also shape the VPC's network: for example, web servers can be given access to the public internet while backend systems (such as databases or application servers) are placed in a private subnet. For the Amazon Elastic Compute Cloud (Amazon EC2) resources in each subnet, multiple layers of protection are available, including security groups and network access control lists (network ACLs). A VPC is a logically isolated virtual network in the AWS Cloud that is dedicated to your account; it belongs to a single AWS Region but can span multiple Availability Zones within that Region. After creating a VPC, I can divide it into subnets, where a subnet is a range of IP addresses within the VPC. Each subnet resides entirely within one Availability Zone, so I built subnets in different Availability Zones for resilience. Subnets can be public or private; private subnets have no direct connectivity to the internet.
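
A minimal boto3 sketch of that layout might look like the following; the CIDR blocks, Region, and Availability Zone names are assumptions chosen for illustration.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a VPC with a /16 IPv4 address range.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Public subnet in one Availability Zone, private subnet in another.
    public_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a")
    private_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1b")

    # An internet gateway gives the public subnet a path to the internet
    # (a route table entry pointing to the gateway is still required).
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)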

For module 5, I managed to get a grade of 90/100.

 

 

  • Demonstrate when to use Amazon Elastic Compute Cloud (EC2), AWS Lambda and AWS Elastic Beanstalk.

I have achieved this objective in module 6. Amazon EC2 delivers virtual machines, which is infrastructure as a service (IaaS). IaaS services give you flexibility and let you manage your own servers: you pick the operating system as well as the size and resource capacity of the server, and virtual machines are familiar to IT professionals who have worked with on-premises computing. Amazon EC2 was one of AWS's earliest services and is still one of its most popular. AWS Lambda is a serverless computing platform: no server deployment or management is required, and you pay only for the compute time you consume. Serverless technologies are unfamiliar to many IT workers, but cloud-native designs built on them provide tremendous scalability at a lower cost than hosting servers 24/7 to serve the same workloads. Container-based services such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and Amazon Elastic Container Registry (Amazon ECR) let you run multiple workloads on a single operating system (OS); containers are more responsive than virtual machines, and container-based solutions remain popular. Finally, AWS Elastic Beanstalk delivers platform as a service (PaaS). It enables rapid application deployment by providing all of the required application services, which frees you to focus on your application code rather than managing the OS and application server. AWS offers a variety of compute services to suit different use cases, and the best compute service to use depends on your use case. Legacy code often dictates the compute architecture you select, but nothing prevents you from evolving the architecture toward proven cloud-native concepts. Amazon EC2 delivers cloud-based virtual machines with full administrative control over the Windows or Linux operating system; Windows Server 2008, 2012, 2016, and 2019 are supported, as are Red Hat, SUSE, Ubuntu, and Amazon Linux. The operating system that runs on a virtual machine is called the guest operating system, to distinguish it from the host operating system, which is installed directly on the server hardware that hosts the virtual machines.
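
To contrast the serverless model with managing EC2 virtual machines, the sketch below shows a complete AWS Lambda handler in Python; the event shape is a hypothetical example, and no server has to be provisioned or patched to run it.

    import json

    def lambda_handler(event, context):
        """Entry point that AWS Lambda invokes; compute is billed per invocation."""
        # Read an optional field from the incoming event (shape assumed for the example).
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }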

For module 6, I managed to get a grade of 80/100.

 

 

  • Differentiate between Amazon S3, Amazon EBS, Amazon EFS and Amazon S3 Glacier.

I have achieved this objective in module 7. Storage is an essential AWS service category; examples include instance store (ephemeral storage), Amazon EBS, Amazon EFS, Amazon S3, and Amazon S3 Glacier. Instance store is temporary storage attached to your Amazon EC2 instance. Amazon EBS provides persistent storage that is mounted as a device on an Amazon EC2 instance; an EBS volume must reside in the same Availability Zone as the instance it is attached to, and an EBS volume cannot be shared between EC2 instances. Amazon EFS, by contrast, is a file system that multiple Amazon EC2 instances can share. Amazon S3 is persistent object storage in which each file becomes an object accessible via a Uniform Resource Locator (URL). Amazon S3 Glacier is for archiving data (for example, when you need long-term storage for archival or compliance reasons). Amazon EBS offers persistent block storage for Amazon EC2 instances, the kind provided by disks and other devices that retain data when power is removed, also known as nonvolatile storage. EBS volumes are replicated within their Availability Zone to protect against component failures, so they are designed to be durable, and they deliver reliable, low-latency performance for your workloads. With Amazon EBS you pay only for what you use and can scale capacity up or down within minutes. Amazon S3 is a managed cloud storage service built for scalability and durability: a bucket can hold a virtually unlimited number of objects that you can write, read, and delete, bucket names must be unique across all of Amazon S3, and individual objects can be up to 5 TB. Amazon S3 stores data redundantly across multiple facilities and multiple devices within each facility, so your data is not tied to any one server and you do not manage any infrastructure; you can upload as many objects as you wish, and the service holds trillions of objects and handles millions of requests each second. Amazon EFS provides simple, scalable, elastic file storage for AWS services and on-premises resources; a simple interface lets you quickly create and configure file systems, and Amazon EFS grows and shrinks automatically as you add and remove data, without affecting applications, so your applications have the storage they require. Finally, because Amazon S3 Glacier is designed for long-term archiving rather than frequent access, data stored there cannot be retrieved instantly the way it can from Amazon S3.
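
The sketch below shows how the relationship between Amazon S3 and S3 Glacier could be expressed with boto3: objects are uploaded to an S3 bucket and a lifecycle rule moves them to the Glacier storage class after 90 days. The bucket name, key, and timing are hypothetical.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "example-reflection-archive"   # bucket names must be globally unique

    s3.create_bucket(Bucket=bucket)
    s3.put_object(Bucket=bucket, Key="reports/2021-q4.csv", Body=b"example data")

    # Archive objects under the reports/ prefix to S3 Glacier after 90 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-old-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }]
        },
    )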

For module 7, I managed to get a grade of 100/100.



 

  • Demonstrate when to use AWS Database services including Amazon Relational Database Service (RDS), Amazon DynamoDB, Amazon Redshift, and Amazon Aurora.

 

I have achieved this objective in module 8. Amazon RDS provides relational database management in the cloud: AWS sets up, operates, and scales a relational database for you without the ongoing administration you would otherwise carry. Amazon RDS delivers scalable capacity while automating administrative tasks, so you can concentrate on your data and your application and give it the performance, high availability, security, and compatibility it needs. If your database is on premises, the database administrator is in charge of everything, and database administration includes optimising applications and queries, configuring and patching hardware, configuring networking and power, and managing heating, ventilation, and air conditioning (HVAC). Moving the database to an Amazon Elastic Compute Cloud (Amazon EC2) instance removes the need to maintain the underlying hardware and data centre operations, but you are still responsible for patching the operating system and managing the database software and backups. Running the database on Amazon RDS or Amazon Aurora reduces the administrative burden further: scaling, high availability, backups, and patching are automated, so you can concentrate on what matters most, optimising your application. Amazon DynamoDB is a fast and flexible NoSQL database service for applications that require consistently low latency; as part of its fault-tolerant design, AWS manages the underlying data infrastructure for this service. It lets you create tables and items, tables can grow as needed, and the system automatically partitions your data to meet workload demands, so a table's storage capacity is practically unlimited; some customers run production tables with billions of items. Because it is a NoSQL database, items in the same table can have different attributes, which lets you add attributes as your application evolves and store newer and older item formats in the same table without schema migrations. Businesses need analytics now, but building a traditional data warehouse is difficult and costly and can take months of time and significant money. Amazon Redshift is a fully managed data warehouse that is easy to set up, use, and scale; using advanced query optimisation, columnar storage on high-performance local disks, and massively parallel processing, it lets you run complex analytic queries against petabytes of structured data, with most results returning in seconds.
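
To illustrate the schemaless nature of DynamoDB described above, the sketch below creates a table and stores two items with different attributes; the table name, keys, and attribute values are made up for the example.

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    # Create a table keyed only by a partition key; no other schema is declared.
    table = dynamodb.create_table(
        TableName="Products",
        KeySchema=[{"AttributeName": "ProductId", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "ProductId", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )
    table.wait_until_exists()

    # Items in the same table can carry different attributes.
    table.put_item(Item={"ProductId": "p-001", "Name": "Aerial photo", "PriceUSD": 25})
    table.put_item(Item={"ProductId": "p-002", "Name": "3D model", "Formats": ["obj", "glb"]})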

 

For module 8, I managed to get a grade of 80/100.

 

 

  • Explain AWS Cloud architectural principles.

 

 

I have achieved this objective in module 9. The AWS Well-Architected Framework is extremely useful for cloud applications and workloads: you can use it to evaluate and improve your cloud designs, and AWS created it after examining hundreds of customer architectures. The framework defines design principles and best practices for each pillar; each best practice area has a set of foundational questions, and each question comes with context and a list of best practices. The module also worked through a case study of a company that mounts equipment (cameras and video cameras) on light aircraft to collect footage of large cities and iconic landmarks. Each device produces image assets that are time-stamped against the aircraft's clock, and an onboard capture machine with an external storage array streams those image assets while continuously gathering navigation data such as GPS position, compass readings, and elevation. Customers can examine photographs and videos of the product on the AnyCompany website, where the images come in many forms (for example, a large-scale walk-around map); the site uses Elastic Load Balancing with HTTPS and an Auto Scaling group of EC2 instances running a content management system, and an S3 bucket holds the static website assets. For production, the company uses proprietary technology to create 3D models from the images and video (extracting structure from motion): a render fleet of g2.2xlarge instances takes jobs from the production queue, creates the 3D models, and stores them in an S3 bucket, and the preview videos that let customers see their purchases on the AnyCompany website are kept in a separate S3 bucket. Outdated previews are deleted after a year, but the models are retained for future projects. The best practice areas for operational excellence are organise, prepare, operate, and evolve. Operations teams must understand business and customer needs so they can support business outcomes efficiently, they develop and apply procedures to respond to operational events and support business needs, and they collect metrics to measure whether business outcomes are being achieved. It is also critical to design operations that can evolve as the business context, priorities, and customer needs change.

For module 9, I managed to get a grade of 80/100.




 

  • Explore key concepts related to Elastic Load Balancing (ELB), Amazon CloudWatch, and Auto Scaling.

 

I have achieved this objective in module 10. Modern high-traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients for text, images, video, or application data, and handling that volume generally requires more servers. Elastic Load Balancing distributes incoming application or network traffic across multiple targets (Amazon EC2 instances, containers, IP addresses, and Lambda functions) within or across Availability Zones; it adapts as traffic to your application changes over time and can scale to the vast majority of workloads. A load balancer accepts traffic from clients and routes it to targets, such as EC2 instances, in one or more Availability Zones. You configure the load balancer to accept incoming traffic by adding listeners: a listener checks for connection requests and specifies the protocol and port on which clients connect to the load balancer, as well as the protocol and port the load balancer uses to connect to the targets. You can also configure health checks, which monitor the health of the targets so that requests are sent only to healthy ones; when the load balancer detects an unhealthy target it stops routing traffic to it, and when it determines the target is healthy again it resumes routing traffic to it. The load balancer types are configured differently: with Application Load Balancers and Network Load Balancers you register targets in target groups and route traffic to the target groups, whereas with Classic Load Balancers you register individual instances. Amazon CloudWatch provides monitoring and observability for DevOps engineers, developers, site reliability engineers, and IT managers. CloudWatch continuously monitors your AWS resources and the applications you run on AWS, and it can collect and track metrics, which are variables that describe your resources and applications. You can set an alarm to automatically send a notification to an Amazon Simple Notification Service (Amazon SNS) topic or to perform an Amazon EC2 Auto Scaling or Amazon EC2 action. Examples of alarm metrics include EC2 CPU utilisation, Elastic Load Balancing request latency, Amazon DynamoDB table throughput, Amazon SQS queue length, and AWS billing charges, and you can also define custom alarms for your own applications or infrastructure. Incoming events, or changes in your AWS environment, can also be routed to targets for processing with Amazon CloudWatch Events; there are a number of built-in targets, as well as Amazon EC2 instances and AWS Lambda functions. CloudWatch Events becomes aware of operational changes as they occur and responds by sending notifications, activating functions, making changes, and capturing state information. Because computing power is a programmable resource, scaling in the cloud is flexible. Amazon EC2 Auto Scaling is an AWS service that helps you maintain application availability by adding or removing EC2 instances according to the rules you define, and its fleet management features help keep your fleet healthy and available. Amazon EC2 Auto Scaling offers several ways to tailor scaling to your applications' needs: you can add or remove EC2 instances manually, on a schedule, or in response to demand, or use AWS Auto Scaling for predictive scaling, and dynamic and predictive scaling can be used together to scale faster.
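
As a small illustration of the alarm concept, the boto3 sketch below creates a CloudWatch alarm on average EC2 CPU utilisation that notifies an SNS topic; the Auto Scaling group name, threshold, and topic ARN are hypothetical values for the example.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when average CPU across a (hypothetical) Auto Scaling group stays above 70%
    # for two consecutive 5-minute periods; the SNS topic ARN is a placeholder.
    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
    )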

 

For module 10, I managed to get a grade of 80/100.