Exam: AWS Certified Solutions Architect - Professional

Total Questions: 118

A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications.
The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure.

What should a DevOps engineer do to meet these requirements?

A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
C. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.
D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.
Answer: D
✅ Explanation: This question is about simplifying multi-language, multi-framework, multi-platform applications and deploying them with centralized control, consistent CI/CD, and minimal infrastructure maintenance.
🔍 Why Option D is the best:
- Per-application repositories in AWS CodeCommit = clean separation and team autonomy.
- CodeBuild = language-agnostic build system that works for any language or framework.
- Docker images = a consistent way to package apps in any language or OS.
- Amazon ECR (Elastic Container Registry) = centralized, secure image storage.
- Amazon ECS with AWS Fargate = container orchestration on serverless hosting; no EC2 management required.
- CodeDeploy + ECS = fully automated CI/CD with rollback, blue/green deployments, etc.
Together, these services create a fully managed, automated, and language-independent DevOps pipeline, which is exactly what the company needs.
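As a minimal sketch of the runtime half of Option D, the boto3 call below registers an ECS task definition that targets Fargate and references an image in ECR. The application name, account ID, Region, and role ARN are illustrative assumptions, not values from the question.

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition for one of the applications.
# "orders-app" and all ARNs/IDs below are hypothetical placeholders.
response = ecs.register_task_definition(
    family="orders-app",
    requiresCompatibilities=["FARGATE"],  # Fargate launch type: no servers to manage
    networkMode="awsvpc",                 # required network mode for Fargate tasks
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "orders-app",
            # Image built by CodeBuild and pushed to Amazon ECR
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-app:latest",
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```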

A company wants to ensure that their EC2 instances are secure. They want to be notified if any new vulnerabilities are discovered on their instances, and they also want an audit trail of all login activities on the instances.

Which solution will meet these requirements?

A. Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Amazon Kinesis Agent to capture system logs and deliver them to Amazon S3.
B. Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Systems Manager Agent to capture system logs and view login activity in the CloudTrail console.
C. Configure Amazon CloudWatch to detect vulnerabilities on the EC2 instances. Install the AWS Config daemon to capture system logs and view them in the AWS Config console.
D. Configure Amazon Inspector to detect vulnerabilities on the EC2 instances. Install the Amazon CloudWatch Agent to capture system logs and record them via Amazon CloudWatch Logs.
Answer: D
✅ Explanation: This solution addresses both requirements: vulnerability detection and an audit trail of login activity.
🔍 Vulnerability detection with Amazon Inspector:
- Automatically scans EC2 instances for known software vulnerabilities (CVEs) and network exposure.
- Provides continuous, automated vulnerability management.
- Sends notifications via Amazon SNS or CloudWatch when new vulnerabilities are found.
📜 Audit trail of logins with the Amazon CloudWatch Agent:
- Captures system-level logs such as /var/log/secure (Linux) or Windows Event Logs.
- These logs include SSH logins, failed login attempts, and user activity.
- Logs can be sent to CloudWatch Logs for alerting, monitoring, and long-term storage.
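As a small illustration of the audit-trail half of Option D, the boto3 sketch below searches a CloudWatch Logs group for SSH login events shipped by the CloudWatch agent. The log group name is an assumption; it depends entirely on how the agent is configured.

```python
import boto3

logs = boto3.client("logs")

# "/ec2/var/log/secure" is a hypothetical log group that the CloudWatch
# agent was configured to ship /var/log/secure into.
page = logs.filter_log_events(
    logGroupName="/ec2/var/log/secure",
    filterPattern="?Accepted ?Failed",  # matches SSH login successes and failures
)
for event in page["events"]:
    print(event["timestamp"], event["message"])
```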

A DevOps Engineer needs to back up sensitive Amazon S3 objects that are stored in a bucket with a private bucket policy, using the S3 cross-Region replication functionality. The objects need to be copied to a target bucket in a different AWS Region and account.

Which actions should be performed to enable this replication? (Choose three.)

A. Create a replication IAM role in the source account.
B. Create a replication IAM role in the target account.
C. Add statements to the source bucket policy allowing the replication IAM role to replicate objects.
D. Add statements to the target bucket policy allowing the replication IAM role to replicate objects.
E. Create a replication rule in the source bucket to enable the replication.
F. Create a replication rule in the target bucket to enable the replication.
Answer: ADE
✅ Explanation:
- A. Create a replication IAM role in the source account: the source bucket's replication configuration requires an IAM role that grants Amazon S3 permission to replicate objects on your behalf.
- D. Add statements to the target bucket policy allowing the replication IAM role to replicate objects: because the target bucket is in a different account, its bucket policy must explicitly allow the replication role from the source account to write objects into it.
- E. Create a replication rule in the source bucket to enable the replication: replication is configured on the source bucket through a rule that specifies the destination bucket, the IAM role, and which objects to replicate.
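For illustration, here is a hedged boto3 sketch of the replication rule from answer E, applied to the source bucket. Bucket names, account IDs, and the role name are assumptions; the role (answer A) and the target bucket policy (answer D) must already be in place.

```python
import boto3

s3 = boto3.client("s3")  # credentials for the SOURCE account

# Enable cross-Region, cross-account replication on the source bucket.
# All names and IDs below are hypothetical placeholders.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        # Replication role created in the source account (answer A)
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-sensitive-objects",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # Target bucket in the other Region/account; answer D
                    # grants this role access via the target bucket policy
                    "Bucket": "arn:aws:s3:::target-bucket",
                    "Account": "222222222222",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)
```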

A company is using Amazon EC2 for various workloads. Company policy requires that instances be managed centrally to standardize configurations. These configurations include standard logging, metrics, security assessments, and weekly patching.

How can the company meet these requirements? (Choose three.)

A. Use AWS Config to ensure all EC2 instances are managed by Amazon Inspector.
B. Use AWS Config to ensure all EC2 instances are managed by AWS Systems Manager.
C. Use AWS Systems Manager to install and manage Amazon Inspector, Systems Manager Patch Manager, and the Amazon CloudWatch agent on all instances.
D. Use Amazon Inspector to install and manage AWS Systems Manager, Systems Manager Patch Manager, and the Amazon CloudWatch agent on all instances.
E. Use AWS Systems Manager maintenance windows with Systems Manager Run Command to schedule Systems Manager Patch Manager tasks. Use the Amazon CloudWatch agent to schedule Amazon Inspector assessment runs.
F. Use AWS Systems Manager maintenance windows with Systems Manager Run Command to schedule Systems Manager Patch Manager tasks. Use Amazon CloudWatch Events to schedule Amazon Inspector assessment runs.
Answer: BCF
✅ Explanation:
- B. AWS Config provides the managed rule ec2-instance-managed-by-systems-manager, which continuously verifies that every EC2 instance is managed by AWS Systems Manager, giving the centralized management baseline the policy requires.
- C. Systems Manager can then install and manage the needed agents across the fleet: the Amazon Inspector agent for security assessments, Patch Manager for patching, and the Amazon CloudWatch agent for standard logging and metrics.
- F. Systems Manager maintenance windows with Run Command schedule the weekly Patch Manager tasks, and Amazon CloudWatch Events schedules the recurring Amazon Inspector assessment runs.
Option E fails because the CloudWatch agent collects logs and metrics; it cannot schedule Inspector assessment runs. Options A and D invert the tools' roles: Amazon Inspector assesses instances, it does not manage them.
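As a minimal sketch of answer F's patching half, the boto3 calls below create a weekly maintenance window and register an AWS-RunPatchBaseline Run Command task in it. The schedule, tag values, and concurrency settings are illustrative assumptions.

```python
import boto3

ssm = boto3.client("ssm")

# Weekly window: Sundays at 02:00 UTC, 3 hours long. Values are illustrative.
window = ssm.create_maintenance_window(
    Name="weekly-patching",
    Schedule="cron(0 2 ? * SUN *)",
    Duration=3,                      # window length in hours
    Cutoff=1,                        # stop starting new tasks 1 hour before the end
    AllowUnassociatedTargets=False,
)

# Target instances by a hypothetical "PatchGroup" tag.
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
)

# Run the managed Patch Manager document inside the window.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    TaskType="RUN_COMMAND",
    TaskArn="AWS-RunPatchBaseline",  # managed document used by Patch Manager
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)
```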

A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete.

What should the DevOps Engineer implement to BEST improve the speed of the pipeline?

A. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
B. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
C. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder.
D. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
Answer: C
✅ Explanation: In AWS CodePipeline, actions within a stage run in parallel when they share the same runOrder value. Because the five Lambda functions are independent, assigning their build and deploy actions the same runOrder removes the unnecessary sequencing and is the most direct way to shorten total pipeline time. Options A, B, and D tune individual build environments, but the five functions would still be processed one after another, so the serial bottleneck remains.
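To make the runOrder mechanics concrete, here is a hedged fragment of a CodePipeline stage definition, as it could be passed to update_pipeline via boto3, in which all five build actions share runOrder=1 and therefore start together. The function and project names are hypothetical.

```python
# Build stage fragment for a CodePipeline pipeline definition. All five
# CodeBuild actions share runOrder=1, so CodePipeline runs them in parallel.
# Function names ("fn-a" ... "fn-e") and project names are placeholders.
build_stage = {
    "name": "Build",
    "actions": [
        {
            "name": f"Build-{fn}",
            "runOrder": 1,  # same runOrder => actions execute in parallel
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "configuration": {"ProjectName": f"build-{fn}"},
            "inputArtifacts": [{"name": "SourceOutput"}],
            "outputArtifacts": [{"name": f"BuildOutput-{fn}"}],
        }
        for fn in ["fn-a", "fn-b", "fn-c", "fn-d", "fn-e"]
    ],
}
```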

A company is creating a software solution that executes a specific parallel-processing mechanism. The software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is license-based, requiring that each individual server have a single, dedicated license installed. The company has 200 licenses and is planning to run 200 server nodes concurrently at most.
The company has requested the following features:
- A mechanism to automate the use of the licenses at scale.
- Creation of a dashboard to use in the future to verify which licenses are available at any moment.

What is the MOST effective way to accomplish these requirements?

A. Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the Mappings section. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
B. Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
C. Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script acquire an available license from SQS. Create an Auto Scaling lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
D. Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by using the parameter --count, with min:max instances to launch. In the user data script, acquire an available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the instance, then manually update the DynamoDB table.
Answer: B
✅ Explanation:
Key requirements: automate license use at scale (one dedicated license per server), provide a dashboard showing which licenses are available at any moment, track all 200 licenses individually, and release a license automatically when its instance terminates.
Analysis of options:
- A. CloudFormation Mappings are static key-value pairs; they cannot be updated at runtime, so they cannot track license allocation and release. Not feasible for dynamic license management.
- B. DynamoDB is a good fit for a dynamic license inventory. Licenses are stored as items with a status attribute (e.g., "available", "in-use"). An instance atomically acquires a license during launch (via the user data script) by marking it "in-use", and an Auto Scaling lifecycle hook releases it back to "available" on termination. DynamoDB also supports queries and scans, making a license-availability dashboard straightforward to build; see the sketch below. This meets all requirements.
- C. An SQS queue of licenses is creative and naturally limits concurrency (a license must be dequeued before a server starts), but SQS offers no way to query which licenses are in use or available, so the dashboard requirement would need external tracking, and visibility timeouts create edge cases when instances fail unexpectedly.
- D. DynamoDB is again suitable for tracking, but this option relies on manual monitoring and manual table updates after instance failures. Manual intervention is error-prone and does not meet the automation requirement.
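The sketch referenced in the analysis of Option B: a hedged boto3 user-data routine that atomically claims an available license item in DynamoDB. The table and attribute names are assumptions; the conditional write is what prevents two instances from claiming the same license.

```python
import boto3
from botocore.exceptions import ClientError

ddb = boto3.client("dynamodb")
TABLE = "licenses"  # hypothetical table: one item per license

def acquire_license(instance_id):
    """Atomically claim one available license; return its ID or None."""
    items = ddb.scan(
        TableName=TABLE,
        FilterExpression="#s = :free",
        ExpressionAttributeNames={"#s": "status"},
        ExpressionAttributeValues={":free": {"S": "available"}},
    )["Items"]
    for item in items:
        license_id = item["license_id"]["S"]
        try:
            # The condition fails if another instance claimed this license
            # between the scan and this update, keeping the claim atomic.
            ddb.update_item(
                TableName=TABLE,
                Key={"license_id": {"S": license_id}},
                UpdateExpression="SET #s = :used, holder = :iid",
                ConditionExpression="#s = :free",
                ExpressionAttributeNames={"#s": "status"},
                ExpressionAttributeValues={
                    ":used": {"S": "in-use"},
                    ":free": {"S": "available"},
                    ":iid": {"S": instance_id},
                },
            )
            return license_id
        except ClientError as err:
            if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
    return None  # no license free: the node should not start the software
```

The lifecycle hook would run the inverse update (in-use back to available), and the dashboard reduces to a scan of the table grouped by status.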

A DevOps Engineer must track the health of a stateless RESTful service sitting behind a Classic Load Balancer. The deployment of new application revisions is through a CI/CD pipeline. If the service's latency increases beyond a defined threshold, deployment should be stopped until the service has recovered.

Which of the following methods allow for the QUICKEST detection time?

A. Use Amazon CloudWatch metrics provided by Elastic Load Balancing to calculate average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
B. Use AWS Lambda and Elastic Load Balancing access logs to detect average latency. Alarm and stop deployment when latency increases beyond the defined threshold.
C. Use AWS CodeDeploy's MinimumHealthyHosts setting to define thresholds for rolling back deployments. If these thresholds are breached, roll back the deployment.
D. Use Metric Filters to parse application logs in Amazon CloudWatch Logs. Create a filter for latency. Alarm and stop deployment when latency increases beyond the defined threshold.
Answer: A
✅ Explanation:
- A. Elastic Load Balancing publishes the Latency metric for a Classic Load Balancer to CloudWatch automatically at 1-minute granularity, with no extra pipeline to build. An alarm on this metric fires within about a minute of a latency spike, and its action can stop the deployment. This is the quickest built-in detection path; a sketch of the alarm follows below.
- B. ELB access logs are delivered to S3 in batches, typically 5 minutes or more after the requests complete, and a Lambda function must then fetch and parse them, so detection lags by several minutes.
- C. CodeDeploy's MinimumHealthyHosts setting tracks instance health during a deployment based on health checks and lifecycle events; it does not monitor latency, so a latency increase that does not fail health checks is never detected.
- D. Metric Filters on CloudWatch Logs require the application to log latency explicitly; log delivery, filter aggregation, and alarm evaluation each add delay, and metric-filter metrics are standard resolution (1-minute), so this path is no faster than the built-in ELB metric while adding complexity.
Conclusion: the built-in ELB CloudWatch Latency metric (Option A) gives the quickest detection with the least machinery.
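The alarm sketched below wires up Option A with boto3; the load balancer name, threshold, and SNS topic (whose subscriber would halt the pipeline) are illustrative assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the built-in Classic Load Balancer Latency metric.
# "my-classic-elb", the threshold, and the SNS topic ARN are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="rest-service-latency-high",
    Namespace="AWS/ELB",             # Classic Load Balancer metrics namespace
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-classic-elb"}],
    Statistic="Average",
    Period=60,                       # ELB publishes this metric every minute
    EvaluationPeriods=1,
    Threshold=2.0,                   # seconds; the "defined threshold"
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:stop-deployment"],
)
```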

A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure.

Which solution will accomplish this?

A. Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to trigger an AWS Lambda function that will promote the replica instance as the master.
B. Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
C. Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to trigger this Lambda function after the failure event occurs.
D. Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge (Amazon CloudWatch Events) event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.
Answer: D
✅ Explanation: Amazon EventBridge (CloudWatch Events) can detect the RDS failure event and run an AWS Lambda function that promotes the cross-Region read replica. Storing the database endpoint in AWS Systems Manager Parameter Store decouples the application from any specific instance: after promotion, the Lambda function writes the new endpoint to the parameter, and the application re-reads it when a database connection fails, so the failover is fully automated. A sketch of such a Lambda function follows below.
- A is wrong because CloudTrail is an API-activity log, not a real-time failure-notification source, and latency-based Route 53 routing is not a write-failover mechanism.
- B is wrong because an Aurora custom endpoint is scoped to a single cluster in a single Region, and CloudTrail cannot run Lambda functions directly.
- C is wrong because promoting a replica is not performed by editing a CloudFormation template, and a stack update adds delay without addressing failure detection.
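The Lambda function referenced above, sketched with boto3 under stated assumptions: the cluster identifier, Region, and parameter name are hypothetical, and the EventBridge rule that invokes the function is configured separately.

```python
import boto3

# DR-Region clients; the Region and all names below are placeholders.
rds = boto3.client("rds", region_name="us-west-2")
ssm = boto3.client("ssm", region_name="us-west-2")

def handler(event, context):
    # Promote the cross-Region Aurora read replica to a standalone cluster.
    rds.promote_read_replica_db_cluster(
        DBClusterIdentifier="app-aurora-replica"
    )
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier="app-aurora-replica"
    )["DBClusters"][0]
    # Publish the writer endpoint; the application re-reads this
    # parameter whenever a database connection fails.
    ssm.put_parameter(
        Name="/app/db/endpoint",
        Value=cluster["Endpoint"],
        Type="String",
        Overwrite=True,
    )
```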

A company recently launched an application that is more popular than expected. The company wants to ensure the application can scale to meet increasing demands and provide reliability using multiple Availability Zones (AZs).
The application runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps engineer has created an Auto Scaling group across multiple AZs for the application.

Instances launched in the newly added AZs are not receiving any traffic for the application.
What is likely causing this issue?

A. Auto Scaling groups can create new instances in a single AZ only.
B. The EC2 instances have not been manually associated to the ALB.
C. The ALB should be replaced with a Network Load Balancer (NLB).
D. The new AZ has not been added to the ALB.
Answer: D
✅ Explanation: When you use an Application Load Balancer (ALB) with an Auto Scaling group, traffic is distributed only to the Availability Zones (AZs) that are enabled in the ALB's configuration. If you add a new AZ to your Auto Scaling group but do not enable it in the ALB, the ALB will not route any traffic to the instances in that AZ, even if they are healthy and part of the Auto Scaling group.
To fix this: in the ALB settings, ensure that the new AZ and its associated subnet are enabled under the ALB's "Availability Zones" configuration. Once added, the ALB will begin routing traffic to instances in the newly added AZ, as in the sketch below.
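A one-call boto3 sketch of the fix; the load balancer ARN and subnet IDs are placeholders. Note that set_subnets replaces the ALB's enabled subnets, so the list must include one subnet per AZ the ALB should serve, including the new one.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable the new AZ by giving the ALB a subnet in it. All IDs are placeholders.
elbv2.set_subnets(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"
    ),
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
)
```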

A healthcare services company is concerned about the growing costs of software licensing for an application for monitoring patient wellness. The company wants to create an audit process to ensure that the application is running exclusively on Amazon EC2 Dedicated Hosts. A DevOps Engineer must create a workflow to audit the application to ensure compliance.

What steps should the Engineer take to meet this requirement with the LEAST administrative overhead?

A. Use AWS Systems Manager Configuration Compliance. Use calls to the put-compliance-items API action to scan and build a database of noncompliant EC2 instances based on their host placement configuration. Use an Amazon DynamoDB table to store these instance IDs for fast access. Generate a report through Systems Manager by calling the list-compliance-summaries API action.
B. Use custom Java code running on an EC2 instance. Set up EC2 Auto Scaling for the instance depending on the number of instances to be checked. Send the list of noncompliant EC2 instance IDs to an Amazon SQS queue. Set up another worker instance to process instance IDs from the SQS queue and write them to Amazon DynamoDB. Use an AWS Lambda function to terminate noncompliant instance IDs obtained from the queue, and send them to an Amazon SNS email topic for distribution.
C. Use AWS Config. Identify all EC2 instances to be audited by enabling Config Recording on all Amazon EC2 resources for the region. Create a custom AWS Config rule that triggers an AWS Lambda function by using the config-rule-change-triggered blueprint. Modify the Lambda evaluateCompliance() function to verify host placement and return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host. Use the AWS Config report to address noncompliant instances.
D. Use AWS CloudTrail. Identify all EC2 instances to be audited by analyzing all calls to the EC2 RunCommand API action. Invoke an AWS Lambda function that analyzes the host placement of the instance. Store the EC2 instance ID of noncompliant resources in an Amazon RDS MySQL DB instance. Generate a report by querying the RDS instance and exporting the query results to a CSV text file.
Answer: C
✅ Explanation: This option provides the most efficient, scalable, and low-maintenance solution:
- AWS Config is designed for resource compliance auditing and automatically tracks changes in resource configurations, including EC2 instance host placement.
- Custom AWS Config rules let you run logic (via AWS Lambda) to check for Dedicated Host usage and automatically return COMPLIANT or NON_COMPLIANT; a sketch of such a function follows below.
- Config reports are easily accessible and provide a centralized view of compliance status.
- Minimal administrative overhead: no need to manage additional infrastructure such as EC2 instances, SQS queues, or manual scanning jobs.
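A hedged sketch of the evaluateCompliance() logic from Option C as a Python Lambda handler; the event shape follows the config-rule-change-triggered pattern, and the tenancy check ("host" means a Dedicated Host) is the core of the rule.

```python
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """Custom AWS Config rule: an EC2 instance is COMPLIANT only when it
    runs on a Dedicated Host (placement tenancy == "host")."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    tenancy = item["configuration"].get("placement", {}).get("tenancy")
    compliance = "COMPLIANT" if tenancy == "host" else "NON_COMPLIANT"

    # Report the result back to AWS Config for the compliance report.
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": compliance,
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }
        ],
        ResultToken=event["resultToken"],
    )
```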