SECP1513 TIS Reflections
Throughout the journey of completing this project with my teammates, Ain and Amos, I learned many new things and developed new skills. First and foremost, I learned to apply the knowledge of 4th IR technologies that I gained throughout this semester in this course. We chose integrated system technologies as the main frame of our low-fidelity prototype. Beyond that, I also developed new interpersonal skills, namely critical thinking and communication. When we were given the task of finding potential or target users for our project, my team faced many restrictions. The main issue was that, as first-year students, we had never been to UTM, and with the current pandemic situation it was hard to gather information such as our target users' experience of the UTM shuttle bus service. We still continued with our project on the UTM shuttle bus service system, even though we did not know the real situation, the users' experiences, or the problems with the system first-hand. In the end, we found a way forward by running a survey and asking many questions of seniors and lecturers, because they had been to UTM and had experienced the shuttle bus service.
Regardless of our aims, we faced barriers in implementing this integrated bus system in UTM. Time was tight: we lacked web application development knowledge, and with our busy class schedules we had to work on the project in small daily sessions to finish on time. All of these obstacles helped me develop a new skill: critical thinking. Through this journey, I was motivated by meeting our team's targets and goals within the deadlines, because it gave me a sense of accomplishment and something I can look back on and say, "I achieved that." What motivated me even more was that we worked well as a team. In the end, we managed to spot the flaws and errors and ensure the end result of this project was as good as possible. I found it rewarding that we worked together from the beginning to solve a problem, overcome challenges, and come up with creative ideas to improve our prototype.
After completing this project, my goal is to take it to the next level. Our team agreed to create and assess this prototype, called Shuttle+, to help UTM students, staff, and visitors move around campus. In keeping with that goal, the system allows bus drivers and UTM students to communicate. UTM students and staff could also use Shuttle+ to book express bus tickets home and choose a pick-up point near UTM. The system informs UTM bus users of a bus's current location, departure time, and arrival time. We also decided to make the app more versatile by letting students book activities or facilities available at UTM, such as kayaking, cycling, and rock climbing, through the app. My goal is to get the administration to consider these ideas, which would help many students and staff get around UTM.
During the project, we gained the creative and inventive knowledge needed to generate new ideas and solutions. My goals can be achieved with perseverance and hard work, and I realised I have to work hard to achieve my big goal. I have always believed that a strong desire to succeed would keep me motivated. There are several things I can do to improve my potential in the industry. I need to improve both my soft and hard skills, and in particular my communication skills, which are vital nowadays; for a computer science student, communication skills are crucial for getting a job. I also need to stay aware of the latest technology and be quick to adopt it. To improve, I plan to join UTM societies like AIROST and CyberX, where I can compete and learn more, and I will keep taking part in extracurricular activities that can enhance my skills. This will help me better explore different situations and improve my potential in the industry.
Title: Introduction to Data Visualization
Speaker: Mr Isma Redha from iCEP
From Industrial Talk 7, we learnt about the Microsoft Power BI software and the data engineering field. The talk was delivered by Mr Isma Redha, a data engineer from iCEP. Mr Isma shared many tips on how to thrive in data engineering studies, as well as the criteria and rules for becoming a data engineer. He also highlighted tips for using Power BI efficiently: do not use more than six colours in a single layout, because it distracts the reader from the data; use formal fonts and elements; and avoid high-contrast colour combinations on charts. Although business intelligence products tend to be mainly the remit of business analysts and data scientists, Power BI's user-friendly nature means it can be used by a range of people within a business. Power BI can generate custom dashboards depending on what data is relevant and what information we need access to, and it works with whatever data we feed it, so we can report on almost anything. It is most popular with departments like finance, marketing, sales, human resources, IT, and operations. However, not many firms have the capacity or the need to support a full-time data engineer, so Power BI is frequently used as a self-service tool by various departments of the business to check on progress and gain insight into their team's performance.
Title: 5G and WiFi 6
Panel: Mr. Nicholas Yong (Huawei)
From Industrial Talk 6, delivered by Mr. Nicholas Yong (Executive Industry Solution Manager from Huawei), we gained a lot of new information about the evolution of network infrastructure, namely 5G and WiFi 6. 5G offers enormous prospects for the economy and all members of society, including consumers, households, enterprises, and communities. Many services will be democratised thanks to the potential savings and increased efficiency of the new technology. All populations will benefit from increased access to information and education as a result of connectivity, and new business opportunities across many industries will increase investment and employment. By broadening the scope of wireless technologies and making devices more autonomous, 5G will contribute to the long-term goal of reducing our carbon footprint and conserving natural resources. 5G communications technology will be more inclusive, progressive, proven, and powerful than any prior generation of communication technology. WiFi 6, meanwhile, extends the battery life of devices on a WiFi 6 network. This improvement benefits the average user and can be a critical factor in enabling the low-power devices that make up the Internet of Things (IoT) to use WiFi communication. Improvements in multiple-input/multiple-output (MIMO) capabilities now allow a router with multiple antennas to both send and receive data transmissions from multiple devices at the same time; WiFi 5 could only send, not receive, multiple signals at once. This leads to better performance where many users are trying to access the network, and it is one of the ways WiFi 6 alleviates the problems of accessing WiFi in congested settings such as sports arenas or entertainment venues. It promises to make connecting to wireless networks more efficient no matter where you are.
Title: Smart Campus: The Journey Starts Here
Panel: Mr. Goh Bih Der (Commscope Malaysia)
In the first minutes of the talk, Mr Goh shared the situation of the world over the last 24 months with COVID-19. Everyone has been living their daily lives virtually through their devices: almost everything is held online, including meetings, bill payments, and keeping in touch with relatives. With emerging network infrastructure technology, we can achieve real convenience in life. On campus, for instance, we have smart lighting, ambient air monitoring, automated locks, and much more, and these innovations are undeniably useful for students doing experiments and research. All of this technology expresses the phrase 'Smart Campus', and networking it together is critically important. To make sure the networking works well, we need a solid network infrastructure; some employers have seen their companies lose business because of unforeseen problems with network connectivity. To maintain the network easily, we have to build more automated, self-optimising, and self-healing network environments, because achieving an optimised and secure network infrastructure is time-consuming and far from easy. We need to gain more knowledge and master this field to improve existing network infrastructure technologies for the next generation.
Title: Current Trends of Augmented Reality in Industry
Panel: Dr Ruzimi Mohamed (OZEL Sdn. Bhd.)
From this industrial talk, we realised that augmented reality technologies are very significant in today's life. By detecting hazards while driving, AR can reduce the number of accidents on the road. Customers can shop with peace of mind, without putting their health at risk, by using a smart mirror instead of a fitting room. Novice surgeons can sharpen their surgical skills by practising with AR technology, hence saving more lives, and with AR lens technology we can obtain information about our surroundings. From these examples, it is crystal clear that AR makes human life more convenient and valuable. Dr Ruzimi mentioned that he has only seven people working things out at his company, because OZEL Sdn Bhd values quality over quantity. With that said, he admitted that he is willing to pay his employees the amount of money they demand, on the condition that they have the quality OZEL requires. This shows that if you have the expertise, it is easier to mark your place in working life. Youngsters nowadays have to get involved with the current and upcoming technologies of Industrial Revolution 4.0 so that we can identify our best interests and start shaping a pathway to master our chosen industry. With a growing number of graduates who have good skills in their fields, our nation will gain new technologies that will accelerate the development of human life.
Title: Amazon Web Services Cloud Computing
Panel: Dr Qusay Al-Maatouk (APU)
The talk was given by Dr. Qusay Al-Maatouk, a lecturer from the Asia Pacific University. The talk was basically about what cloud computing is really like. From what Dr. Qusay said, it is a computing utility that provides the basic needs of anyone who needs more than just a computer, for instance a database, a network, and so on. I also learnt that AWS services are used worldwide, including by large companies such as McDonald's, the huge fast-food chain. He also shared that cloud computing has many more benefits than traditional computing: traditional computing has many flaws because of staffing problems, whereas cloud computing does not need as many staff, which makes it easier to use and much more hassle-free. He also talked about AWS, which stands for Amazon Web Services, and the three ways to interact with it: the AWS Management Console, the Command Line Interface, and the Software Development Kits. AWS also provides many services, one of which is the AWS Pricing Calculator, which lets you estimate your monthly costs. All in all, the talk broadened my horizons on what cloud computing does. Cloud computing helps businesses of all sizes: cloud service providers supply the storage, databases, servers, networking, and software through which a business can grow. Cloud computing allows a business to cut its operational and fixed monthly costs for hardware, databases, servers, and software licences. Cloud servers and data centres are managed by the cloud service provider, so there is no need to manage that infrastructure yourself, which eventually reduces the need for IT resources, including people.
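To see what the SDK route looks like in practice, I later tried a small example with boto3, the AWS SDK for Python. This is just my own minimal sketch, not something from the talk, and it assumes AWS credentials are already configured on the machine:

```python
import boto3

# Create a client for the Amazon S3 service (the SDK way of talking
# to AWS, as opposed to the Management Console or the CLI).
s3 = boto3.client("s3")

# List the buckets in the account - the same information the
# AWS Management Console shows on its S3 page.
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])
```

The Command Line Interface would do the same with a single command, and the Console shows the same list in the browser, which is exactly the point about having three interchangeable ways to interact with AWS.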
Title: Technology Information System & the 4th Industrial Revolution
Panel: Ms. Sarah Khadijah Taylor (CyberSecurity Malaysia)
The first half of the talk comprised a brief introduction to the industrial revolution, including Industry 4.0 transformation drivers and the enabling technologies. This gave us a wider, clearer view of Malaysia's journey towards IR4.0. After that, she briefly explained Malaysia's readiness for IR4.0, the plans and strategies, and the issues and challenges of transforming the whole nation for a brighter future. This section showed the current progress of the world and of Malaysia, and how we, the younger generation, can contribute and improve on what we have today. Ms Sarah also shared her experience as a Strategic and Project Manager in the Digital Forensics Department at CyberSecurity Malaysia. Based on her experience, there will be many potential new jobs related to IT, especially in the cybersecurity field, such as IT security specialist, information security analyst, network security engineer, and security engineer. The talk ended with some words of encouragement from Ms Sarah. She mentioned that it is important for us, especially as students, to explore and learn new things within our course, and she highlighted the importance of marketing ourselves to land our dream jobs. From the talk, we gained a lot of new information about Malaysia's ongoing processes and strategies towards Industrial Revolution 4.0. Things like cybersecurity, big data, autonomous robots, and data systems can make our lives easier and more efficient. Cybersecurity, for example, is important because it protects all categories of data from theft and damage, including sensitive data, protected health information (PHI), personal information, intellectual property, and governmental and industry information. Having advanced cyber defence programs and mechanisms in place to protect this data is crucial. Everyone in society relies on critical infrastructure such as hospitals, other healthcare institutions, and financial services, and we all rely on the safety of our data and personal information, for example when logging into an application or entering sensitive data into digital healthcare systems. If these systems, networks, and infrastructures lack the right protection, our data might fall into the wrong hands.
Title: Technology Information System & the 4th Industrial Revolution
Panel: Mr. Nazri Edham (TM)
From this industrial talk, I learnt a lot about IoT within the IR4.0 scope. Until recently, Internet access was confined to devices such as PCs, tablets, and smartphones, but now, thanks to the Internet of Things, virtually any item can be connected to the Internet and monitored remotely. The Internet of Things (IoT) is a system of interconnected devices that use the internet to send and receive data. To track the position of items we buy online, turn off lights we forgot when leaving the house without driving back home, or control garden humidity and temperature without spending hours in the garden, we need the IR4.0 advances that come from expertise in IoT. IoT not only makes life easier on our premises by automating routine chores such as temperature and lighting adjustments; it can also benefit the environment by ensuring that energy is only used when and where it is needed. TM also introduced SWIMS (Smart Water Integrated Management System), one of whose benefits is reducing water wastage in the country. IoT can also help people track their own health through wearable devices, giving them more control over their lives and health management. In other words, we get to enjoy these technologies without leaving environmental and health concerns behind.
Introduction
AWS Academy Cloud Foundations is intended for students who seek an overall understanding of cloud computing concepts, independent of specific technical roles. It provides a detailed overview of cloud concepts, AWS core services, security, architecture, pricing, and support. This course also helped me prepare for the AWS Certified Cloud Practitioner exam.
After earning the AWS Academy Cloud Foundations badge, I should be able to fulfil the objectives of the course:
- Define the AWS Cloud.
I achieved this objective in Module 1. Cloud computing delivers computing power, databases, storage, applications, and other IT resources on demand through the internet. These resources are hosted on servers in big data centres situated all around the world; when you use a cloud service like AWS, the service provider manages the machines you use. The resources serve as building blocks for creating solutions that fulfil both business and technical needs. As cloud computing has grown in popularity, multiple service models and deployment methods have emerged for different customers, and knowing them helps you choose the right service model and deployment strategy for your needs. In short, cloud computing is the on-demand delivery of IT resources via the internet with pay-as-you-go pricing; it enables you to think of (and use) your infrastructure as software, and there are AWS analogues for most traditional, on-premises IT components. Amazon Web Services (AWS) is a secure cloud platform with a worldwide portfolio of cloud products, meaning you have on-demand access to compute, storage, network, database, and other IT resources, plus the tools to manage them. You can create AWS resources instantly and start using them within minutes. AWS is adaptable: your AWS environment can be dynamically scaled up or down to suit usage patterns and save money, or shut down temporarily or permanently, and AWS billing becomes an operating expenditure rather than a capital one. AWS services can accommodate nearly any application or demand, and you can use them as building blocks for complex, scalable solutions that adapt as your needs evolve. A short sketch below shows one such on-demand request.
From Module 1, I managed to score 80/100.
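To make "infrastructure as software" concrete for myself, here is a minimal boto3 sketch of creating a resource on demand. The bucket name and Region are placeholders I made up; S3 bucket names must be globally unique:

```python
import boto3

s3 = boto3.client("s3", region_name="ap-southeast-1")

# One API call provisions a brand-new storage resource in seconds -
# the on-demand, pay-as-you-go model described above.
s3.create_bucket(
    Bucket="secp1513-demo-bucket-12345",  # placeholder; must be globally unique
    CreateBucketConfiguration={"LocationConstraint": "ap-southeast-1"},
)
```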
- Explain the AWS pricing philosophy.
I achieved this objective in Module 2. AWS costs are driven by three factors: compute, storage, and outbound data transfer, and these vary based on the AWS product and pricing model. In most circumstances, data transfer between AWS services within the same AWS Region is free, though there are exceptions, so verify data transfer rates before using AWS. Outbound data transfer is summed across services and charged at a single rate, appearing on the monthly bill as AWS Data Transfer Out. The AWS pricing philosophy is simple: while AWS's service offerings have expanded tremendously, the pricing strategy has not. Every month you pay for what you use, you can change your mind at any moment, and there are no long-term contracts. In short: pay for what you use, pay less when you reserve, pay less per unit when you use more, and pay even less as AWS grows. The AWS Free Tier assists new AWS users with free usage for up to a year: new customers get a year of a free Amazon Elastic Compute Cloud (Amazon EC2) t2.micro instance along with free usage tiers of Amazon S3, Amazon EBS, Elastic Load Balancing, AWS data transfer, and other services. While these services are free, other AWS services used alongside them may incur costs; for example, when you automatically scale up EC2 instances beyond the free allocation, you will be charged. The best way to estimate costs is to examine the fundamental characteristics of each AWS service, estimate your usage of each characteristic, and map that usage to the prices posted on the AWS website; a toy version of such an estimate is sketched below. This pricing strategy gives you the flexibility to choose the services you need for each project and to pay only for what you use.
From Module 2, I managed to score 80/100.
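As a worked example of mapping estimated usage to posted prices, here is a toy calculation in Python in the spirit of the AWS Pricing Calculator. The rates below are made-up placeholders, not real AWS prices:

```python
# Hypothetical rates - check the AWS website for the real, current ones.
ec2_hourly_rate = 0.0116   # USD per hour for one small instance (placeholder)
hours_per_month = 24 * 30  # one instance running for the whole month
data_out_gb = 50           # estimated outbound data transfer in GB
data_out_rate = 0.09       # USD per GB for Data Transfer Out (placeholder)

compute_cost = ec2_hourly_rate * hours_per_month
transfer_cost = data_out_gb * data_out_rate

print(f"Compute:  ${compute_cost:.2f}")   # 8.35
print(f"Data out: ${transfer_cost:.2f}")  # 4.50
print(f"Total:    ${compute_cost + transfer_cost:.2f}")
```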
- Identify the global infrastructure components of AWS.
I achieved this objective in Module 3. Regions are the backbone of the AWS Cloud; at the time of the course, AWS had 22 Regions. An AWS Region is a physical geographical location containing several Availability Zones, and each Availability Zone in turn contains one or more data centres. Regions are isolated from one another to achieve fault tolerance and stability: resources in one Region are not automatically replicated to another, and data stored in a Region is not copied outside that Region, so if your business requires it, you must replicate data across Regions yourself. Availability Zones are separate areas inside each AWS Region; each one enables applications and databases that are more highly available, fault tolerant, and scalable than a single data centre could be. Each Availability Zone has multiple data centres (typically three) and hundreds of thousands of servers, and they are the partitions of the AWS Global Infrastructure. Availability Zones have their own power infrastructure and are geographically separated from each other by several kilometres, though all within 100 km of each other. Data centres are the foundation of the AWS architecture, but customers do not designate a particular data centre for resource deployment; the Availability Zone is the most granular level of placement a customer can specify, even though the data ultimately lives in a data centre. Amazon operates modern, highly available data centres, but failures can still affect the availability of instances in the same location: if you host all your instances in a single location that fails, none of your instances will be available. The AWS Global Infrastructure has several valuable features. First, it is elastic and scalable: resources can dynamically adjust to increases or decreases in capacity requirements and rapidly accommodate growth. Second, it is fault tolerant, with built-in component redundancy that lets it continue operating despite a failed component. Finally, it requires minimal to no human intervention while providing high availability with minimal downtime. The short sketch below lists the Regions and Availability Zones from code.
From Module 3, I managed to score 80/100.
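This is my own small boto3 sketch, not course material; it assumes configured credentials, and the number of Regions it prints will change over time as AWS grows:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Every Region this account can see.
regions = ec2.describe_regions()["Regions"]
print(len(regions), "Regions visible to this account")

# The Availability Zones inside the client's own Region.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "-", az["State"])
```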
- Describe security and compliance measures of the AWS Cloud including AWS Identity and Access Management (IAM).
I achieved this objective in Module 4. AWS and the customer share responsibility for security and compliance. This shared responsibility model is intended to benefit customers, but in order to provide the flexibility and control required to deploy customer solutions on AWS, the customer remains responsible for certain elements of security. The distinction is between security "of" the cloud and security "in" the cloud. AWS is responsible for security of the cloud: it operates, administers, and controls everything from the software virtualization layer, through the bare-metal host operating system and hypervisor, down to the physical security of the facilities where the services run, securing the hardware, software, networking, and facilities that power all AWS Cloud services. That worldwide infrastructure comprises AWS Regions, Availability Zones, and edge locations. The customer, in turn, is responsible for security in the cloud: encrypting data in transit and at rest, securing the network, keeping security credentials and logins safe, and configuring the security groups and the operating systems of the compute instances they launch (including updates and security patches). AWS Identity and Access Management (IAM) controls access to compute, storage, database, and application services, letting you decide which users may access which services. Authentication is a fundamental computer security concept: consider how you authenticate yourself at the airport, showing a security official proof of your identity before you may enter a restricted area. Getting access to AWS resources in the cloud works similarly. When creating an IAM user, you choose the kind of access the user has to AWS resources: programmatic access, AWS Management Console access, or both. With programmatic access, the IAM user must provide an access key ID and a secret access key when calling the AWS API through the AWS CLI, an SDK, or another development tool. With AWS Management Console access, the IAM user must fill out the browser login form, providing the 12-digit account ID or its alias together with a username and password; if the user has activated multi-factor authentication (MFA), they will also be asked for an authentication code. The sketch below walks through creating such a user.
From Module 4, I managed to score 90/100.
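The sketch below is my own illustration of those IAM steps: create a user, attach a managed policy, and issue the access key pair used for programmatic access. The user name and policy choice are examples I picked, not from the course:

```python
import boto3

iam = boto3.client("iam")

# Create the IAM user.
iam.create_user(UserName="example-student")

# Grant permissions by attaching an AWS managed policy (read-only here).
iam.attach_user_policy(
    UserName="example-student",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Programmatic access: the access key ID / secret access key pair the
# user presents when calling the AWS API via the CLI or an SDK.
keys = iam.create_access_key(UserName="example-student")
print(keys["AccessKey"]["AccessKeyId"])
```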
- Create an Amazon Virtual Private Cloud (Amazon VPC).
I achieved this objective in Module 5. Amazon Virtual Private Cloud (Amazon VPC) is an AWS service that lets me provision logically isolated sections of the AWS Cloud where I can deploy AWS resources. Amazon VPC allows me to choose my own IP address range, create subnets, and configure route tables and network gateways, and a VPC can use both IPv4 and IPv6 for secure access to resources and applications. I can also shape the VPC's network layout: for example, web servers can be given access to the public internet, while backend systems (such as databases or application servers) are placed in a private subnet. For the Amazon Elastic Compute Cloud (Amazon EC2) resources in each subnet, multiple layers of protection are available, including security groups and network access control lists (network ACLs). A VPC is a logically isolated virtual network in the AWS Cloud, dedicated to my account; it belongs to a single AWS Region and can span that Region's Availability Zones. After constructing a VPC, I can subdivide it into subnets: a subnet is a range of IP addresses within the VPC, and each subnet resides entirely within one Availability Zone, so I build subnets in different Availability Zones for resilience. Subnets can be public or private; private subnets have no direct internet connectivity. The sketch below shows these steps in code.
From Module 5, I managed to score 90/100.
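A minimal boto3 sketch of those steps, with made-up CIDR ranges and Availability Zone names:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Create the VPC with an IPv4 address range of my choosing.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve the VPC into subnets; each subnet lives in one Availability Zone.
public = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="ap-southeast-1a"
)
private = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="ap-southeast-1b"
)

# An internet gateway gives the public subnet a route to the internet;
# the private subnet (for backend systems) gets none.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc_id)
```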
- Demonstrate when to use Amazon Elastic Compute Cloud (EC2), AWS Lambda and AWS Elastic Beanstalk.
I achieved this objective in Module 6. Amazon EC2 delivers virtual machines, i.e. infrastructure as a service (IaaS). IaaS services give you flexibility and let you manage your own servers: you pick the operating system as well as the server's size and resource capacity. Virtual machines are familiar to IT professionals who have worked with on-premises computing, and Amazon EC2 was one of AWS's earliest services and remains one of its most popular. AWS Lambda, by contrast, is a serverless computing platform: no server deployment or management is required, and you pay only for the compute time you actually use. Many IT workers are unfamiliar with serverless technologies, but such cloud-native designs provide tremendous scalability at a lower cost than hosting servers around the clock to serve the same workloads. Container-based services like Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and Amazon Elastic Container Registry let you run different workloads on a single operating system; containers start faster than virtual machines, and container-based solutions remain popular. Finally, AWS Elastic Beanstalk delivers platform as a service (PaaS): it enables rapid application deployment by providing all the required application services, freeing you to focus on your application code rather than managing the OS and application server. AWS offers a variety of compute services to suit various use cases, and the best service to employ depends on your use case; legacy code often dictates the compute architecture you select, but nothing stops you from evolving the architecture towards proven cloud-native concepts. Amazon EC2 provides cloud-based virtual machines with full administrative control over the Windows or Linux operating system; Windows Server 2008, 2012, 2016, and 2019 are supported, as are Red Hat, SUSE, Ubuntu, and Amazon Linux. The operating system that runs on a virtual machine is called a guest operating system, to distinguish it from the host operating system installed directly on the server hardware that hosts the virtual machines. Both compute options are sketched below.
From Module 6, I managed to score 80/100.
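In my sketch below, the AMI ID is a placeholder (a real Amazon Linux AMI ID for the Region would be needed), and the Lambda handler is the entirety of what you write in the serverless model:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Amazon EC2 (IaaS): launch one small virtual machine that I manage myself.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

# AWS Lambda (serverless): no machine to launch at all. I only write the
# handler; AWS runs it on demand and bills per invocation.
def lambda_handler(event, context):
    # 'event' carries the request data; 'context' carries runtime details.
    return {"statusCode": 200, "body": "Hello from Lambda"}
```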
- Differentiate between Amazon S3, Amazon EBS, Amazon EFS and Amazon S3 Glacier.
I achieved this objective in Module 7. Storage is an essential AWS service category; examples include instance store (ephemeral storage), Amazon EBS, Amazon EFS, Amazon S3, and Amazon S3 Glacier. Instance store is temporary storage attached to your Amazon EC2 instance. Amazon EBS provides persistent, mountable block storage that appears as a device on an Amazon EC2 instance: an EBS volume can only be attached within the same Availability Zone, and cannot be shared between EC2 instances. Block storage means disks and other devices that keep data persistently when power is removed, also known as nonvolatile storage. Amazon EBS volumes are replicated within their Availability Zone to protect you from component failures, so they are built to last, and they deliver reliable, low-latency performance for your workloads; with Amazon EBS you pay only for what you use and can scale up or down in minutes. Amazon EFS, in contrast, is a simple, scalable, elastic file system that multiple Amazon EC2 instances (and on-premises resources) can share: a simple interface lets you rapidly create and configure file systems, and Amazon EFS grows and shrinks automatically as you add and remove data, without disrupting applications, so your applications always have the storage they need. Amazon S3 is permanent object storage in which each file becomes an object accessible via a Uniform Resource Locator (URL). It is a managed cloud storage service built for scalability and reliability: a bucket can hold a nearly unlimited number of objects that you can write, read, and delete, bucket names must be unique across all of Amazon S3, and objects can be up to 5 TB. Amazon S3 stores data redundantly across multiple facilities and multiple devices in each facility, your data is not tied to any one server, you don't need to manage any infrastructure, and the service handles trillions of objects and millions of requests per second. Finally, Amazon S3 Glacier is for archiving data, for example when you need long-term storage for archival or compliance reasons; because it is designed for data kept over long periods, you cannot access it instantly the way you can with Amazon S3. The sketch below contrasts object storage with block storage.
From Module 7, I managed to score 100/100.
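My own sketch of the contrast: an S3 object addressed by bucket and key versus an EBS block volume bound to one Availability Zone. Names and sizes are illustrative placeholders:

```python
import boto3

# Object storage (Amazon S3): every file becomes an object reachable
# through a bucket + key, i.e. a URL.
s3 = boto3.client("s3", region_name="ap-southeast-1")
s3.put_object(
    Bucket="secp1513-demo-bucket-12345",  # placeholder bucket name
    Key="reports/reflection.txt",
    Body=b"Object storage example",
)

# Block storage (Amazon EBS): a volume that lives in one Availability
# Zone and attaches to a single EC2 instance at a time.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")
ec2.create_volume(
    AvailabilityZone="ap-southeast-1a",
    Size=8,            # GiB
    VolumeType="gp2",
)
```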
- Demonstrate when to use AWS Database services including Amazon Relational Database Service (RDS), Amazon DynamoDB, Amazon Redshift, and Amazon Aurora.
I achieved this objective in Module 8. Amazon RDS provides managed relational databases in the cloud: AWS sets up, operates, and scales a relational database without any ongoing administration on your part, delivering scalable capacity while automating administrative tasks. With Amazon RDS you can focus on your data and your application, and give the application the performance, high availability, security, and compatibility it deserves. If your database runs on-premises, the database administrator is in charge of everything, including optimising applications and queries, configuring and patching hardware, setting up networking and power, and managing HVAC. Moving the database to Amazon Elastic Compute Cloud (Amazon EC2) eliminates the need to maintain underlying hardware or data centre operations, but you are still in charge of patching the OS and managing software and backups. Running the database on Amazon RDS or Amazon Aurora reduces the administrative burden further: scaling, high availability, backups, and patching are automated, so you can concentrate on what matters most, optimising your application. Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistently low latency. Amazon manages the underlying data infrastructure as part of the fault-tolerant design. You create tables and items; a table can grow as needed, the system automatically partitions data to meet workload demands, and a table's storage capacity is practically unlimited (some customers have production tables with billions of items). Because it is a NoSQL database, items in the same table can have different attributes, which lets you add attributes as your program evolves and store items in newer and older formats side by side without schema migrations, as the sketch below shows. Businesses need analytics now, but building a data warehouse is traditionally difficult and costly, taking months of setup and a lot of money. Amazon Redshift is a fully managed data warehouse that is easy to set up, use, and scale: using advanced query optimisation, columnar storage on high-performance local disks, and massively parallel data processing, it lets you run complex analytic queries on petabytes of structured data, with results returning in seconds.
From Module 8, I managed to score 80/100.
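To see the schema-free idea for myself, I sketched a DynamoDB table (names are made up, loosely inspired by our Shuttle+ project) where two items in the same table carry different attributes:

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-southeast-1")

# Create a table with only the key defined - no fixed schema beyond it.
table = dynamodb.create_table(
    TableName="ShuttleBookings",
    KeySchema=[{"AttributeName": "booking_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "booking_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Two items with different shapes in the same table -
# no schema migration needed when attributes are added later.
table.put_item(Item={"booking_id": "b-001", "route": "KTDI-FC"})
table.put_item(Item={"booking_id": "b-002", "route": "KTDI-FC",
                     "pickup_time": "08:30", "seats": 2})
```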
- Explain AWS Cloud architectural principles.
I achieved this objective in Module 9. The AWS Well-Architected Framework is of great benefit to your cloud applications and workloads: you can use it to evaluate and improve your cloud designs. AWS created this framework after examining hundreds of customer architectures. For each of its pillars there are design principles and best practices; each best practice has a set of foundational questions, and each question comes with context and a list of recommended practices. The module walked these ideas through a case study of a fictional company, AnyCompany, that captures aerial imagery: cameras and video cameras mounted on light aircraft collect footage of large cities and iconic landmarks, each device producing picture elements time-stamped to the aircraft's clock, while an onboard capture machine with an external storage array streams the image assets and continuously gathers navigation data such as GPS position, compass readings, and elevation. Customers examine photographs and videos of the product on the AnyCompany website, which offers the imagery in many forms (for example, a large-scale, walk-around map); the site uses Elastic Load Balancing with HTTPS and an Auto Scaling group of EC2 instances running a content management system, with static website assets held in an S3 bucket. A render service, running on a fleet of g2.2xlarge instances, uses proprietary technology to build 3D models from the images and video (extracting structure from motion), takes its jobs from a production queue, and stores the resulting models in an S3 bucket; the previews customers see on the website live in a separate S3 bucket, and outdated previews are deleted every year while the models themselves are kept for future projects. The best practices for the operational excellence pillar are to organise, prepare, operate, and evolve: operations teams must understand business and customer demands to support business outcomes efficiently, they develop and apply procedures to respond to operational events and support business needs, and they collect metrics to assess business outcomes. It is critical to build operations that adapt to changing business context, objectives, and customer demands.
From Module 9, I managed to score 80/100.
- Explore key concepts related to Elastic Load Balancing (ELB), Amazon CloudWatch, and Auto Scaling.
I achieved this objective in Module 10. Modern high-traffic websites must respond to hundreds of thousands, if not millions, of simultaneous user or client requests for text, photos, video, or application data, and handling such traffic generally takes more than one server. Elastic Load Balancing distributes application or network traffic among multiple targets (Amazon EC2 instances, containers, IP addresses, and Lambda functions) within or across Availability Zones; it adapts to changing traffic and can scale to handle most workloads. A load balancer accepts client traffic and routes it to targets (such as EC2 instances) in one or more Availability Zones. You accept incoming traffic by adding listeners to your load balancer: a listener checks for connection requests, specifying the protocol and port for client connections to the load balancer as well as the protocol and port the load balancer uses to connect to the targets. You can also configure the load balancer to perform health checks, which monitor the targets' health so requests are only sent to healthy ones: when a load balancer identifies an unhealthy target, it stops routing traffic to it, and resumes once the target is healthy again. The load balancer types are configured differently: with Application Load Balancers and Network Load Balancers you create target groups and route traffic to them, while with Classic Load Balancers you register individual instances. Amazon CloudWatch provides monitoring and observability for DevOps engineers, developers, site reliability engineers, and IT managers. CloudWatch continuously monitors your AWS resources and applications, gathering and tracking metrics (variables) for them. You can set an alarm to automatically send a notification to an Amazon Simple Notification Service (Amazon SNS) topic or to trigger an Amazon EC2 Auto Scaling or Amazon EC2 action. Examples of metrics you might alarm on include EC2 CPU utilisation, ELB request latency, Amazon DynamoDB table throughput, Amazon SQS queue length, and your AWS bill; you can also define custom alarms for your own applications or infrastructure. Incoming events (changes in your AWS environment) can also be routed to targets for processing using Amazon CloudWatch Events; there are a number of built-in targets, as well as Amazon EC2 instances and AWS Lambda functions. CloudWatch Events is notified of operational changes as they occur and responds by delivering notifications, activating functions, making adjustments, and recording state information. Because compute power is a programmable resource, scaling in the cloud is flexible. Amazon EC2 Auto Scaling is an AWS service that helps you maintain application availability by adding or removing EC2 instances according to rules you define, and its fleet management features help keep your fleet healthy and available. Amazon EC2 Auto Scaling offers several ways to tailor scaling to your applications' needs: you can add or remove EC2 instances manually, on a schedule, dynamically in response to demand, or predictively with AWS Auto Scaling; used together, dynamic and predictive scaling make scaling faster. A sketch of a CloudWatch alarm follows below.
From Module 10, I managed to score 80/100.
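As referenced above, here is my own sketch of a CloudWatch alarm that watches average CPU on one EC2 instance and notifies an SNS topic when it stays above 70%. The instance ID and topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate in 5-minute windows
    EvaluationPeriods=2,      # two consecutive breaching windows
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-southeast-1:123456789012:example-topic"],
)
```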