AWS (SAA-C02) Practice Test 3
November 1, 2021

Welcome to your AWS (SAA-C02) Practice Test 3.

Exam Instructions
The exam comprises the following types of questions:
- Multiple Choice, Single Response
- Multiple Choice, Multiple Response
There is no negative marking.

1. A solutions architect is designing a solution to access a catalog of images and provide users with the ability to submit requests to customize images. Image customization parameters will be included in any request sent to an Amazon API Gateway API. The customized image will be generated on demand, and users will receive a link they can click to view or download their customized image. The solution must be highly available for viewing and customizing images. What is the MOST cost-effective solution to meet these requirements?
A. Use Amazon EC2 instances to manipulate the original image into the requested customization. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances.
B. Use AWS Lambda to manipulate the original image to the requested customization. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customization. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.

2. A company's web application is using multiple Linux Amazon EC2 instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure and to provide storage that complies with atomicity, consistency, isolation, and durability (ACID). What should a solutions architect do to meet these requirements?
A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance.
B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance.
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance.
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

3. A client reports that they want to see an audit log of any changes made to AWS resources in their account. What can the client do to achieve this?
A. Set up Amazon CloudWatch monitors on services they own.
B. Enable AWS CloudTrail logs to be delivered to an Amazon S3 bucket.
C. Use Amazon CloudWatch Events to parse logs.
D. Use AWS OpsWorks to manage their resources.

4. An application generates audit logs of operational activities. Compliance requirements mandate that the application retain the logs for 5 years. How can these requirements be met?
A. Save the logs in an Amazon S3 bucket and enable Multi-Factor Authentication Delete (MFA Delete) on the bucket.
B. Save the logs in an Amazon EFS volume and use Network File System version 4 (NFSv4) locking with the volume.
C. Save the logs in an Amazon Glacier vault and use the Vault Lock feature.
D. Save the logs in an Amazon EBS volume and take monthly snapshots.
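Reference sketch for question 4: a Glacier Vault Lock policy can enforce a retention period by denying archive deletion until the archives reach a given age. The following boto3 sketch is illustrative only; the vault name, account ID, and five-year window (1825 days) are assumptions, not part of the question.

```python
import json

import boto3

glacier = boto3.client("glacier")

# Deny deletion of any archive younger than 5 years (1825 days).
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-deletes-for-5-years",
            "Principal": "*",
            "Effect": "Deny",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/audit-logs",  # placeholder
            "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "1825"}},
        }
    ],
}

# Vault Lock is a two-step process: initiating returns a lock ID, and the
# policy becomes immutable once the lock is completed within 24 hours.
response = glacier.initiate_vault_lock(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="audit-logs",
    policy={"Policy": json.dumps(lock_policy)},
)
glacier.complete_vault_lock(
    accountId="-",
    vaultName="audit-logs",
    lockId=response["lockId"],
)
```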
5. A solutions architect is designing a solution where users will be directed to a backup static error page if the primary website is unavailable. The primary website's DNS records are hosted in Amazon Route 53, where their domain is pointing to an Application Load Balancer (ALB). Which configuration should the solutions architect use to meet the company's needs while minimizing changes and infrastructure overhead?
A. Point a Route 53 alias record to an Amazon CloudFront distribution with the ALB as one of its origins. Then, create custom error pages for the distribution.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page hosted within an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Update the Route 53 record to use a latency-based routing policy. Add the backup static error page hosted within an Amazon S3 bucket to the record so the traffic is sent to the most responsive endpoints.
D. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance hosting a static error page as endpoints. Route 53 will only send requests to the instance if the health checks fail for the ALB.

6. A company's website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow. Which set of actions will improve website performance for users worldwide?
A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution.
B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB.
C. Launch new EC2 instances hosting the same web application in different Regions closer to the users. Then register the instances with the same ALB using cross-Region VPC peering.
D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2 instances. Then update an Amazon Route 53 record to point to the S3 buckets.

7. A product team is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances. The application team believes the amount of space needed will continue to grow for the next 6 months. Which set of actions should a solutions architect take to support these needs?
A. Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances.
B. Store the data in an Amazon EFS file system. Mount the file system on the application instances.
C. Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances.
D. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Update the bucket policy to allow access to the application instances.

8. A user has created an EBS volume with 1000 IOPS. What is the average IOPS that the user will get for most of the year, according to the EC2 SLA, if the volume is attached to an EBS-optimized instance?
A. 950
B. 990
C. 1000
D. 900
9. A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries. Which action will meet these requirements?
A. Modify the ALB security group to deny incoming traffic from blocked countries.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.

10. A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company. How should security groups be configured in this situation? (Select TWO.)
A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

11. Your EBS volumes do not seem to be performing as expected, and your team leader has requested that you look into improving their performance. Which of the following is not a true statement relating to the performance of your EBS volumes?
A. Frequent snapshots provide a higher level of data durability and they will not degrade the performance of your application while the snapshot is in progress.
B. General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128 MB/s per volume.
C. There is a relationship between the maximum performance of your EBS volumes, the amount of I/O you are driving to them, and the amount of time it takes for each transaction to complete.
D. There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly created or restored EBS volume.

12. A solutions architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access. Which of the following would be the LEAST complicated implementation?
A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days.
B. Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL.
D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary.
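Reference sketch for question 12: the signed-URL options (C and D) can be generated with botocore's CloudFrontSigner. This is a minimal sketch; the key pair ID, private key path, distribution domain, and object path are placeholders.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign the CloudFront policy with the private key that matches the
    # public key registered with the CloudFront key pair / key group.
    with open("cloudfront_private_key.pem", "rb") as key_file:  # placeholder path
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key pair ID

# Canned policy: the URL stops working after the chosen expiry.
expires = datetime.datetime.utcnow() + datetime.timedelta(minutes=60)
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/premium/image-pack.zip",  # placeholder
    date_less_than=expires,
)
print(signed_url)
```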
13. A solutions architect is designing storage for a high performance computing (HPC) environment based on Amazon Linux. The workload stores and processes a large amount of engineering drawings that require shared storage and heavy computing. Which storage option would be the optimal solution?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Lustre
C. Amazon EC2 instance store
D. Amazon EBS Provisioned IOPS SSD (io1)

14. An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs. Which solution is the MOST cost-effective?
A. DEV with Spot Instances and PROD with On-Demand Instances
B. DEV with On-Demand Instances and PROD with Spot Instances
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
D. DEV with On-Demand Instances and PROD with Scheduled Reserved Instances

15. A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control. Which solution will satisfy these requirements?
A. Configure Amazon EFS storage and set the Active Directory domain for authentication.
B. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
C. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
D. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

16. A three-tier web application processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled from the web tier using Amazon SQS, and an Amazon DynamoDB backend. At peak times, customers who submit orders using the site have to wait much longer than normal to receive confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times. Which action will be MOST effective in accomplishing this?
A. Replace the SQS queue with Amazon Kinesis Data Firehose.
B. Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier.
C. Add an Amazon CloudFront distribution to cache the responses for the web tier.
D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth.

17. A company running an on-premises application is migrating the application to AWS to increase its elasticity and availability. The current architecture uses a Microsoft SQL Server database with heavy read activity. The company wants to explore alternate database options and migrate database engines, if needed. Every 4 hours, the development team does a full copy of the production database to populate a test database. During this period, users experience latency. What should a solutions architect recommend as a replacement database?
A. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database.
B. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database.
D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
18. A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers. Which replacement for the on-premises file share is MOST resilient and durable?
A. Migrate the file share to Amazon RDS.
B. Migrate the file share to AWS Storage Gateway.
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).

19. A monolithic application was recently migrated to AWS and is now running on a single Amazon EC2 instance. Due to application limitations, it is not possible to use automatic scaling to scale out the application. The chief technology officer (CTO) wants an automated solution to restore the EC2 instance in the unlikely event the underlying hardware fails. What would allow for automatic recovery of the EC2 instance as quickly as possible?
A. Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance if it becomes impaired.
B. Configure an Amazon CloudWatch alarm to trigger an SNS message that alerts the CTO when the EC2 instance is impaired.
C. Configure AWS CloudTrail to monitor the health of the EC2 instance, and if it becomes impaired, trigger instance recovery.
D. Configure an Amazon EventBridge event to trigger an AWS Lambda function once an hour that checks the health of the EC2 instance and triggers instance recovery if the EC2 instance is unhealthy.

20. A company has several business systems that require access to data stored in a file share. The business systems will access the file share using the Server Message Block (SMB) protocol. The file share solution should be accessible from both the company's legacy on-premises environment and AWS. Which services meet the business requirements? (Select TWO.)
A. Amazon EBS
B. Amazon EFS
C. Amazon FSx for Windows
D. Amazon S3
E. AWS Storage Gateway file gateway

21. A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest. Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?
A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.
B. Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes.
C. Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes.
D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
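Reference sketch for question 21: EBS and RDS encryption at rest with AWS KMS is enabled at volume or instance creation time. A minimal boto3 sketch follows; the instance identifier, credentials, and key alias are placeholders, and an existing unencrypted RDS instance would instead be snapshotted, copied with encryption, and restored.

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Opt every new EBS volume in the Region into encryption with the account's
# default EBS KMS key (a customer managed key can be set with
# modify_ebs_default_kms_key_id).
ec2.enable_ebs_encryption_by_default()

# New RDS instances can be encrypted at creation time with a KMS key.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db-encrypted",  # placeholder name
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",             # placeholder secret
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",                    # or a customer managed key
)
```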
22. A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently. What should a solutions architect do to help improve the application's processing time and ensure it can handle the load at any level?
A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.

23. A company is deploying a multi-instance application within AWS that requires minimal latency between the instances. What should a solutions architect recommend?
A. Use an Auto Scaling group with a cluster placement group.
B. Use an Auto Scaling group with a single Availability Zone in the same AWS Region.
C. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region.
D. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.

24. A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly. What is the MOST cost-effective method to establish this type of connection?
A. Implement a client VPN.
B. Implement AWS Direct Connect.
C. Implement a bastion host on Amazon EC2.
D. Implement an AWS Site-to-Site VPN connection.

25. A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular, and a large number of users across the world are accessing this content. This has resulted in a cost increase. Which action will DECREASE cost without compromising user accessibility?
A. Change the EBS volume to Provisioned IOPS (PIOPS).
B. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
C. Split the video into multiple, smaller segments so users are routed to the requested video segments only.
D. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.

26. A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability. Which storage solution meets these requirements?
A. Amazon S3 Standard
B. Amazon S3 Intelligent-Tiering
C. Amazon S3 Glacier Deep Archive
D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
27. A mobile gaming company runs application servers on Amazon EC2 instances. The servers receive updates from players every 15 minutes. The mobile game creates a JSON object of the progress made in the game since the last update, and sends the JSON object to an Application Load Balancer. As the mobile game is played, game updates are being lost. The company wants to create a durable way to get the updates in order. What should a solutions architect recommend to decouple the system?
A. Use Amazon Kinesis Data Streams to capture the data and store the JSON object in Amazon S3.
B. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3.
C. Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to capture the data and EC2 instances to process the messages in the queue.
D. Use Amazon Simple Notification Service (Amazon SNS) to capture the data and EC2 instances to process the messages sent to the Application Load Balancer.

28. A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code. Which solution meets these requirements?
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.

29. A web application must persist order data to Amazon S3 to support near-real-time processing. A solutions architect needs to create an architecture that is both scalable and fault tolerant. Which solutions meet these requirements? (Select TWO.)
A. Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
B. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
C. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
D. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
E. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.

30. A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval. What should a solutions architect recommend to meet these requirements?
A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
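Reference sketch for question 30: a Lambda function subscribed to a Kinesis data stream can strip sensitive attributes before writing each transaction to DynamoDB. The table name and field names below are assumptions for illustration only.

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # placeholder table name


def handler(event, context):
    """Triggered by a Kinesis Data Streams event source mapping."""
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Drop sensitive attributes before persisting (placeholder field names).
        payload.pop("card_number", None)
        payload.pop("cvv", None)

        table.put_item(Item=payload)
```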
31. A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys. What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

32. A company is designing an internet-facing web application. The application runs on Amazon EC2 for Linux-based instances that store sensitive user data in Amazon RDS MySQL Multi-AZ DB instances. The EC2 instances are in public subnets, and the RDS DB instances are in private subnets. The security team has mandated that the DB instances be secured against web-based attacks. What should a solutions architect recommend?
A. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
B. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move DB instances to the same subnets that EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances.
C. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group.
D. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.
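Reference sketch for question 32: restricting the database tier to traffic from the web tier is done by referencing the web tier's security group as the source of the ingress rule, rather than listing individual instance IPs. The group IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic into the database tier only from members of the web
# tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1234567890abcd",  # database tier security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-0web123456789abcd",  # web tier security group (placeholder)
                    "Description": "MySQL from web tier only",
                }
            ],
        }
    ],
)
```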
33. A company is running a media store across multiple Amazon EC2 instances distributed across multiple Availability Zones in a single VPC. The company wants a high-performing solution to share data between all the EC2 instances, and prefers to keep the data within the VPC only. What should a solutions architect recommend?
A. Create an Amazon S3 bucket and call the service APIs from each instance's application.
B. Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
C. Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances.
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances.

34. A company stores 200 GB of data each month in Amazon S3. The company needs to perform analytics on this data at the end of each month to determine the number of items sold in each sales region for the previous month. Which analytics strategy is MOST cost-effective for the company to use?
A. Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana.
B. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight.
C. Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight.
D. Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.

35. A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred. Which solution meets these requirements?
A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

36. A company is hosting 60 TB of production-level data in an Amazon S3 bucket. A solutions architect needs to bring that data on premises for quarterly audit requirements. This export of data must be encrypted while in transit. The company has low network bandwidth in place between AWS and its on-premises data center. What should the solutions architect do to meet these requirements?
A. Deploy AWS Migration Hub with 90-day replication windows for data transfer.
B. Deploy an AWS Storage Gateway volume gateway on AWS. Enable a 90-day replication window to transfer the data.
C. Deploy Amazon Elastic File System (Amazon EFS), with lifecycle policies enabled, on AWS. Use it to transfer the data.
D. Deploy an AWS Snowball device in the on-premises data center after completing an export job request in the AWS Snowball console.
37. A company hosts historical weather records in Amazon S3. The records are downloaded from the company's website by way of a URL that resolves to a domain name. Users all over the world access this content through subscriptions. A third-party provider hosts the company's root domain name, but the company recently migrated some of its services to Amazon Route 53. The company wants to consolidate contracts, reduce latency for users, and reduce costs related to serving the application to subscribers. Which solution meets these requirements?
A. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name.
B. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name.
C. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
D. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.

38. A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly. What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

39. A company is using Amazon DynamoDB to stage its product catalog, which is 1 GB. Since a product entry on average consists of 100 KB of data, and the average traffic is about 250 requests per second, the database administrator has provisioned 3,000 RCUs of read capacity throughput. However, some products are very popular and users are experiencing delays or timeouts due to throttling. What improvement offers a long-term solution to this problem?
A. Increase the throughput provisioning to 6,000 read capacity units (RCUs).
B. Use Amazon DynamoDB Accelerator (DAX) to maintain the frequently read items.
C. Augment Amazon DynamoDB by storing only the key product attributes, with the details stored on Amazon S3.
D. Change the partition key to consist of a hash of product key and product type instead of just the product key.
40. A solutions architect must provide a fully managed replacement for an on-premises solution that allows employees and partners to exchange files. The solution must be easily accessible to employees connecting from on-premises systems, remote employees, and external partners. Which solution meets these requirements?
A. Use AWS Transfer for SFTP to transfer files into and out of Amazon S3.
B. Use AWS Snowball Edge for local storage and large-scale data transfers.
C. Use Amazon FSx to store and transfer files to make them available remotely.
D. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.

41. A company runs a legacy application with a single-tier architecture on an Amazon EC2 instance. Disk I/O is low, with occasional small spikes during business hours. The company requires the instance to be stopped from 8 PM to 8 AM daily. Which storage option is MOST appropriate for this workload?
A. Amazon EC2 instance storage
B. Amazon EBS General Purpose SSD (gp2) storage
C. Amazon S3
D. Amazon EBS Provisioned IOPS SSD (io2) storage

42. A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements. The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC. Which storage solution meets these requirements?
A. Amazon FSx Multi-AZ deployments
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points

43. A company has a Microsoft .NET application that runs on an on-premises Windows Server. The application stores data by using an Oracle Database Standard Edition server. The company is planning a migration to AWS and wants to minimize development changes while moving the application. The AWS application environment should be highly available. Which combination of actions should the company take to meet these requirements? (Select TWO.)
A. Refactor the application as serverless with AWS Lambda functions running .NET Core.
B. Rehost the application in AWS Elastic Beanstalk with the .NET platform in a Multi-AZ deployment.
C. Replatform the application to run on Amazon EC2 with the Amazon Linux Amazon Machine Image (AMI).
D. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Amazon DynamoDB in a Multi-AZ deployment.
E. Use AWS Database Migration Service (AWS DMS) to migrate from the Oracle database to Oracle on Amazon RDS in a Multi-AZ deployment.

44. A company is making a prototype of the infrastructure for its new website by manually provisioning the necessary infrastructure. This infrastructure includes an Auto Scaling group, an Application Load Balancer, and an Amazon RDS database. After the configuration has been thoroughly validated, the company wants the capability to immediately deploy the infrastructure for development and production use in two Availability Zones in an automated fashion. What should a solutions architect recommend to meet these requirements?
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two Availability Zones.
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure. Use AWS Config to deploy the prototype infrastructure into two Availability Zones.
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype infrastructure to automatically deploy new environments in two Availability Zones.
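Reference sketch for question 44: once the prototype is captured as a CloudFormation template, the same template can be deployed repeatedly for each environment. The template file name, stack names, and parameter below are placeholder assumptions.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Deploy the same template once per environment.
for environment in ("dev", "prod"):
    with open("website-infrastructure.yaml") as template:  # placeholder template
        cloudformation.create_stack(
            StackName=f"website-{environment}",
            TemplateBody=template.read(),
            Parameters=[{"ParameterKey": "Environment", "ParameterValue": environment}],
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
```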
45. A company has an application that serves clients deployed in more than 20,000 retail storefront locations around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company allows each retail location to register the IP address that the retail location has been allocated by its local ISP. The company's security team recommends increasing the security of the application endpoint by restricting access to only the IP addresses registered by the retail locations. What should a solutions architect do to meet these requirements?
A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.
B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.
C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that incoming requests are from the registered IP addresses.
D. Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with entries for each of the registered IP addresses.

46. A company is running a publicly accessible serverless application that uses Amazon API Gateway and AWS Lambda. The application's traffic recently spiked due to fraudulent requests from botnets. Which steps should a solutions architect take to block requests from unauthorized users? (Select TWO.)
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when making the API call.

47. A team has an application that detects new objects being uploaded into an Amazon S3 bucket. The uploads trigger an AWS Lambda function to write metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database. Which action should the team take to ensure high availability?
A. Enable Cross-Region Replication to ensure high availability.
B. Create a Lambda function for each Availability Zone the application is deployed in.
C. Enable Multi-AZ on the RDS PostgreSQL database.
D. Create a DynamoDB stream for the DynamoDB table.
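Reference sketch for question 47: converting a Single-AZ RDS instance to Multi-AZ is a single modification call; RDS provisions a synchronous standby in another Availability Zone and fails over to it automatically. The instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="metadata-postgres",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,  # otherwise the change waits for the maintenance window
)
```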
48. A company is using Amazon S3 as its local repository for weekly analysis reports. One of the company-wide requirements is to secure data at rest using encryption. The company chooses Amazon S3 server-side encryption (SSE). How can the object be decrypted when a GET request is issued?
A. The user needs a PUT request to decrypt the object.
B. The user needs to decrypt the object using a private key.
C. Amazon S3 manages encryption and decryption automatically.
D. Amazon S3 provides a server-side key for decrypting the object.

49. A company requires operating system permissions on a relational database server. What should a solutions architect suggest as a configuration for a highly available database architecture?
A. Multiple Amazon EC2 instances in a database replication configuration that uses two Availability Zones
B. A standalone Amazon EC2 instance with a selected database installed
C. Amazon RDS in a Multi-AZ configuration with Provisioned IOPS
D. Multiple Amazon EC2 instances in a replication configuration that uses a placement group

50. A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 123 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while keeping the most accessed files readily available for its users. Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 Inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.

51. A company requires operating system permissions on a relational database server. What should a solutions architect suggest as a configuration for a highly available database architecture?
A. Multiple Amazon EC2 instances in a database replication configuration that uses two Availability Zones
B. A standalone Amazon EC2 instance with a selected database installed
C. Amazon RDS in a Multi-AZ configuration with Provisioned IOPS
D. Multiple Amazon EC2 instances in a replication configuration that uses a placement group

52. A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 123 KB in size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while keeping the most accessed files readily available for its users. Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 Inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
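Reference sketch for questions 50 and 52: an S3 Lifecycle rule can transition objects from S3 Standard to S3 Standard-IA 90 days after creation. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="ringtone-clips",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-standard-ia-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```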
53. A solutions architect is designing a solution for a dynamic website, "example.com," that is deployed in two AWS Regions: Tokyo, Japan, and Sydney, Australia. The architect wants to ensure that users located in Australia are directed to the website deployed in the Sydney AWS Region and users located in Japan are directed to the website in the Tokyo AWS Region when they browse to "example.com." Which service should the architect use to achieve this goal with the LEAST administrative effort?
A. Amazon CloudFront with geolocation routing
B. Amazon Route 53
C. Application Load Balancer
D. Network Load Balancer deployed across multiple Regions

54. A solutions architect is designing an application that will allow business users to upload objects to Amazon S3. The solution needs to maximize object durability. Objects also must be readily available at any time and for any length of time. Users will access objects frequently within the first 30 days after the objects are uploaded, but users are much less likely to access objects that are older than 30 days. Which solution meets these requirements MOST cost-effectively?
A. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Glacier after 30 days.
B. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Store all the objects in S3 Intelligent-Tiering with an S3 Lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

55. A company needs to retain application log files for a critical application for 10 years. The application team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are rarely accessed. The application generates more than 10 TB of logs per month. Which storage option meets these requirements MOST cost-effectively?
A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.
C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep Archive.
D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than 1 month old to S3 Glacier Deep Archive.

56. A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer (ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The city needs a solution that requires minimal configuration changes and provides an audit trail for the DDoS sources. Which solution meets these requirements?
A. Enable an AWS WAF web ACL on the ALB and configure rules to block traffic from unknown sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
D. Create an Amazon CloudFront distribution for the application and set the ALB as the origin. Enable an AWS WAF web ACL on the distribution and configure rules to block traffic from unknown sources.
57. A company has an application running as a service in Amazon Elastic Container Service (Amazon ECS) using the Amazon EC2 launch type. The application code makes AWS API calls to publish messages to Amazon Simple Queue Service (Amazon SQS). What is the MOST secure method of giving the application permission to publish messages to Amazon SQS?
A. Use AWS Identity and Access Management (IAM) to grant SQS permissions to the role used by the launch configuration for the Auto Scaling group of the ECS cluster.
B. Create a new IAM user with SQS permissions. Then update the task definition to declare the access key ID and secret access key as environment variables.
C. Create a new IAM role with SQS permissions. Then update the task definition to use this role for the task role setting.
D. Update the security group used by the ECS cluster to allow access to Amazon SQS.

58. A solutions architect is designing an architecture that includes web application and database tiers. The web tier must be capable of auto scaling. The solutions architect has decided to separate each tier into its own subnets. The design includes two public subnets and four private subnets. The security team requires that tiers be able to communicate with each other only when there is a business need and that all other network traffic be blocked. What should the solutions architect do to meet these requirements?
A. Create an Amazon GuardDuty source/destination rule set to control communication.
B. Create one security group for all tiers to limit traffic to only the required sources and destinations.
C. Create specific security groups for each tier to limit traffic to only the required sources and destinations.
D. Create network ACLs in all six subnets to limit traffic to the sources and destinations required for the application to function.

59. A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template. What should the solutions architect do to meet these requirements?
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile and associate the instance profile with the application instances.
C. Use the parameters section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables.
D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys and pass them to the application instances through the user data.
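Reference sketch for question 59: granting instances access through an IAM role and instance profile avoids embedding credentials. The role, profile, and policy names below are placeholders, and the managed policy would normally be scoped down to the specific tables.

```python
import json

import boto3

iam = boto3.client("iam")

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Role that the application instances will assume.
iam.create_role(
    RoleName="app-dynamodb-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
)
iam.attach_role_policy(
    RoleName="app-dynamodb-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",  # scope down in practice
)

# The instance profile is what EC2 actually attaches to an instance.
iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-dynamodb-profile",
    RoleName="app-dynamodb-role",
)
```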
60. A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements?
A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.
C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.
D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway.

61. A company has an application that uses overnight digital images of products on store shelves to analyze inventory data. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and obtains the images from an Amazon S3 bucket for their metadata to be processed by worker nodes for analysis. A solutions architect needs to ensure that every image is processed by the worker nodes. What should the solutions architect do to meet this requirement in the MOST cost-efficient way?
A. Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group.
B. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images.
C. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On-Demand Instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks.
D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.

62. A company that operates a web application on premises is preparing to launch a newer version of the application on AWS. The company needs to route requests to either the AWS-hosted or the on-premises-hosted application based on the URL query string. The on-premises application is not available from the internet, and a VPN connection is established between Amazon VPC and the company's data center. The company wants to use an Application Load Balancer (ALB) for this launch. Which solution meets these requirements?
A. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to each target group of each ALB. Route with Amazon Route 53 based on the URL query string.
B. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string.
C. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string.
D. Use one ALB with two AWS Auto Scaling groups: one for the AWS resource and one for on premises. Add hosts to each Auto Scaling group. Route with Amazon Route 53 based on the URL query string.
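Reference sketch for question 62: an ALB listener rule can forward requests to a different target group based on the URL query string, while everything else falls through to the default action. The ARNs, rule priority, and query-string key/value below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc/def",  # placeholder
    Priority=10,
    Conditions=[
        {
            "Field": "query-string",
            # Requests whose query string contains version=v2 match this rule.
            "QueryStringConfig": {"Values": [{"Key": "version", "Value": "v2"}]},
        }
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/aws-app/123",  # placeholder
        }
    ],
)
```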
63. A company is using AWS Organizations with two AWS accounts: Logistics and Sales. The Logistics account operates an Amazon Redshift cluster. The Sales account includes Amazon EC2 instances. The Sales account needs to access the Logistics account's Amazon Redshift cluster. What should a solutions architect recommend to meet this requirement MOST cost-effectively?
A. Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data.
B. Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account.
C. Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account.
D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.

64. A company is migrating its applications to AWS. Currently, applications that run on premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data. The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity. Which solution should the company use for the data transfer to meet these requirements?
A. AWS DataSync
B. AWS Migration Hub
C. AWS Snowball Edge Storage Optimized
D. AWS Transfer for SFTP

65. A solutions architect is designing the architecture for a new web application. The application will run on AWS Fargate containers with an Application Load Balancer (ALB) and an Amazon Aurora PostgreSQL database. The web application will perform primarily read queries against the database. What should the solutions architect do to ensure that the website can scale with increasing traffic? (Choose two.)
A. Enable auto scaling on the ALB to scale the load balancer horizontally.
B. Configure Aurora Auto Scaling to adjust the number of Aurora Replicas in the Aurora cluster dynamically.
C. Enable cross-zone load balancing on the ALB to distribute the load evenly across containers in all Availability Zones.
D. Configure an Amazon Elastic Container Service (Amazon ECS) cluster in each Availability Zone to distribute the load across multiple Availability Zones.
E. Configure Amazon Elastic Container Service (Amazon ECS) Service Auto Scaling with a target tracking scaling policy that is based on CPU utilization.
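Reference sketch for question 65: Aurora Auto Scaling (option B) is configured through the Application Auto Scaling API; the cluster name and capacity limits below are placeholders. ECS Service Auto Scaling (option E) uses the same API with ServiceNamespace="ecs" and ScalableDimension="ecs:service:DesiredCount".

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Scale the number of Aurora Replicas based on average reader CPU utilization.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:web-aurora-cluster",  # placeholder cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:web-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```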