Exam: AWS Certified DevOps Engineer - Professional

Total Questions: 486

A company wants to migrate its content sharing web application hosted on Amazon EC2 to a serverless architecture. The company currently deploys changes to its application by creating a new Auto Scaling group of EC2 instances and a new Elastic Load Balancer, and then shifting the traffic away using an Amazon Route 53 weighted routing policy.
For its new serverless application, the company is planning to use Amazon API Gateway and AWS Lambda. The company will need to update its deployment processes to work with the new application. It will also need to retain the ability to test new features on a small number of users before rolling the features out to the entire user base.

Which deployment strategy will meet these requirements?

A. Use AWS CDK to deploy API Gateway and Lambda functions. When code needs to be changed, update the AWS CloudFormation stack and deploy the new version of the APIs and Lambda functions. Use a Route 53 failover routing policy for the canary release strategy.
B. Use AWS CloudFormation to deploy API Gateway and Lambda functions using Lambda function versions. When code needs to be changed, update the CloudFormation stack with the new Lambda code and update the API versions using a canary release strategy. Promote the new version when testing is complete.
C. Use AWS Elastic Beanstalk to deploy API Gateway and Lambda functions. When code needs to be changed, deploy a new version of the API and Lambda functions. Shift traffic gradually using an Elastic Beanstalk blue/green deployment.
D. Use AWS OpsWorks to deploy API Gateway in the service layer and Lambda functions in a custom layer. When code needs to be changed, use OpsWorks to perform a blue/green deployment and shift traffic gradually.
Answer: B
✅ Explanation
The company wants to:
- Migrate to a serverless architecture (Amazon API Gateway and AWS Lambda).
- Deploy using a canary release strategy, exposing a small percentage of users to new code before full rollout.
✅ Option B is the best fit:
- Uses AWS CloudFormation for infrastructure as code.
- Supports Lambda versions and aliases, which are the key to canary deployments.
- API Gateway supports canary releases via deployment stages and stage variables, which integrate with Lambda versions.
- Promoting the new version after testing completes is exactly how a canary release works.
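A minimal boto3 sketch of the canary mechanics behind option B, assuming a Lambda function named content-api with a live alias (in practice CloudFormation would manage these resources and weights rather than direct API calls):

```python
import boto3

lam = boto3.client("lambda")

# Publish the newly deployed code as an immutable version.
new_version = lam.publish_version(FunctionName="content-api")["Version"]

# Keep the alias pointed at the current stable version, but send 5% of
# invocations to the new version (the canary).
lam.update_alias(
    FunctionName="content-api",
    Name="live",
    FunctionVersion="1",  # current stable version
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.05}},
)

# After testing succeeds, promote the new version to receive all traffic.
lam.update_alias(
    FunctionName="content-api",
    Name="live",
    FunctionVersion=new_version,
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```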

A company's application is currently deployed to a single AWS Region. Recently, the company opened a new office on a different continent. The users in the new office are experiencing high latency. The company's application runs on Amazon EC2 instances behind an Application Load Balancer (ALB) and uses Amazon DynamoDB as the database layer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones.

A DevOps Engineer is tasked with minimizing application response times and improving availability for users in both Regions.

Which combination of actions should be taken to address the latency issues? (Choose three.)

A. Create a new DynamoDB table in the new Region with cross-Region replication enabled.
B. Create new ALB and Auto Scaling group global resources and configure the new ALB to direct traffic to the new Auto Scaling group.
C. Create new ALB and Auto Scaling group resources in the new Region and configure the new ALB to direct traffic to the new Auto Scaling group.
D. Create Amazon Route 53 records, health checks, and latency-based routing policies to route to the ALB.
E. Create Amazon Route 53 aliases, health checks, and failover routing policies to route to the ALB.
F. Convert the DynamoDB table to a global table.
Answer: CDF
✅ Explanation
To minimize response times and improve availability across continents, you need to:
- Deploy application resources (EC2, ALB) in the new Region.
- Ensure the database (DynamoDB) replicates across Regions.
- Route user traffic to the closest Region based on latency.
✅ Correct answers:
- C. Create new ALB and Auto Scaling group resources in the new Region and configure the new ALB to direct traffic to the new Auto Scaling group. Deploying the application backend in the new Region puts compute close to the local users, reducing latency.
- D. Create Amazon Route 53 records, health checks, and latency-based routing policies to route to the ALB. Latency-based routing sends each user to the Region with the lowest latency, and health checks ensure traffic only goes to healthy endpoints.
- F. Convert the DynamoDB table to a global table. DynamoDB global tables provide multi-Region, active-active replication, enabling local reads and writes and improving performance and availability.
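A minimal boto3 sketch of the Route 53 piece (option D), assuming placeholder hosted zone ID, domain name, ALB DNS names, and ALB canonical hosted zone IDs; EvaluateTargetHealth on the alias records supplies the health checking:

```python
import boto3

r53 = boto3.client("route53")

def upsert_latency_record(region, alb_dns_name, alb_zone_id):
    """Create or update a latency-based alias record pointing at one Region's ALB."""
    r53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE_ZONE",  # placeholder: zone hosting app.example.com
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": region,  # one record set per Region
                    "Region": region,         # makes this a latency-based record
                    "AliasTarget": {
                        "HostedZoneId": alb_zone_id,   # the ALB's canonical zone ID
                        "DNSName": alb_dns_name,
                        "EvaluateTargetHealth": True,  # route only to healthy targets
                    },
                },
            }],
        },
    )

# One record per Region; all values are placeholders.
upsert_latency_record("us-east-1", "original-alb.us-east-1.elb.amazonaws.com", "Z_ALB_ZONE_US")
upsert_latency_record("ap-southeast-1", "new-alb.ap-southeast-1.elb.amazonaws.com", "Z_ALB_ZONE_AP")
```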

A DevOps engineer used an AWS CloudFormation custom resource to set up AD Connector. The AWS Lambda function executed and created AD Connector, but CloudFormation is not transitioning from CREATE_IN_PROGRESS to CREATE_COMPLETE.
Which action should the engineer take to resolve this issue?
A. Ensure the Lambda function code has exited successfully.
B. Ensure the Lambda function code returns a response to the pre-signed URL.
C. Ensure the Lambda function IAM role has cloudformation:UpdateStack permissions for the stack ARN.
D. Ensure the Lambda function IAM role has ds:ConnectDirectory permissions for the AWS account.
Answer: B
✅ Explanation
When using AWS CloudFormation custom resources, the Lambda function must send a response to the pre-signed S3 URL that CloudFormation provides in the event data. This response tells CloudFormation whether the custom resource operation succeeded or failed.
⚙️ What happens:
- CloudFormation invokes the Lambda function.
- The function performs the custom logic (e.g., creates the AD Connector).
- CloudFormation waits for a response at the pre-signed S3 URL.
- If no response is received, the stack remains in CREATE_IN_PROGRESS and eventually times out.
✅ Correct option: B. Ensure the Lambda function code returns a response to the pre-signed URL.
- This response is required for CloudFormation to complete any custom resource operation.
- If it is missing, CloudFormation hangs, which is exactly the behavior described in the scenario.
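A minimal sketch of a well-behaved custom resource handler. It relies on the cfnresponse helper, which AWS makes available only to Lambda code defined inline (ZipFile) in the template; the create_ad_connector() helper is hypothetical and stands in for the actual ds:ConnectDirectory logic.

```python
import cfnresponse  # helper AWS injects for Lambda code defined inline (ZipFile)

def handler(event, context):
    try:
        if event["RequestType"] == "Create":
            # create_ad_connector() is a hypothetical helper wrapping ds:ConnectDirectory.
            connector_id = create_ad_connector(event["ResourceProperties"])
        else:
            connector_id = event.get("PhysicalResourceId", "ad-connector")
        # Without this callback to the pre-signed URL, CloudFormation never hears
        # back and the stack sits in CREATE_IN_PROGRESS until it times out.
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {"ConnectorId": connector_id},
                         physicalResourceId=connector_id)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```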

A company plans to stop using Amazon EC2 key pairs for SSH access, and instead plans to use AWS Systems Manager Session Manager. To further enhance security, access to Session Manager must take place over a private network only.

Which combination of actions will accomplish this? (Choose two.)

A. Allow inbound access to TCP port 22 in all associated EC2 security groups from the VPC CIDR range.
B. Attach an IAM policy with the necessary Systems Manager permissions to the existing IAM instance profile.
C. Create a VPC endpoint for Systems Manager in the desired Region.
D. Deploy a new EC2 instance that will act as a bastion host to the rest of the EC2 instance fleet.
E. Remove any default routes in the associated route tables.
Answer: BC
✅ Explanation
Key considerations:
- Session Manager reaches the instance through the SSM Agent, which must be able to call the Systems Manager endpoints.
- To keep that traffic on the private network, use VPC interface endpoints for the Systems Manager services.
- The instance profile must have the IAM permissions that allow Systems Manager operations.
Evaluating the options:
- A. Allow inbound access to TCP port 22 in all associated EC2 security groups from the VPC CIDR range. No. After moving from SSH key pairs to Session Manager, port 22 access is no longer needed and has no effect on Session Manager.
- B. Attach an IAM policy with the necessary Systems Manager permissions to the existing IAM instance profile. Yes. The instance profile (the IAM role attached to the EC2 instances) needs permissions such as those in AmazonSSMManagedInstanceCore so the instances can communicate with Systems Manager.
- C. Create a VPC endpoint for Systems Manager in the desired Region. Yes. To keep Session Manager traffic private (off the internet), create interface VPC endpoints for com.amazonaws.<region>.ssm, com.amazonaws.<region>.ec2messages, and com.amazonaws.<region>.ssmmessages. Communication then stays inside the AWS network.
- D. Deploy a new EC2 instance that will act as a bastion host to the rest of the EC2 instance fleet. No. This works against the goal of retiring key pair-based SSH; Session Manager eliminates the need for bastion hosts.
- E. Remove any default routes in the associated route tables. No. Removing default routes (such as the internet gateway route) isolates the subnet, but without VPC endpoints the instances could no longer reach Systems Manager at all. The proper fix is the VPC endpoints, not route removal.
Correct answers: B and C.
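A minimal boto3 sketch of step C, assuming placeholder VPC, subnet, and security group IDs; the three interface endpoints below are the ones Session Manager needs when instances have no internet path.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoints that keep Session Manager traffic inside the VPC.
# VPC, subnet, and security group IDs are placeholders; the security group
# must allow inbound HTTPS (443) from the VPC CIDR range.
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
        SecurityGroupIds=["sg-0cccccccccccccccc"],
        PrivateDnsEnabled=True,
    )
```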

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.

How should the DevOps Engineer overcome this?

A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function
B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond
C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function
D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready
Answer: A
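In a CodeDeploy Lambda deployment, the BeforeAllowTraffic hook runs after the new function version exists but before the alias shifts traffic to it, so it can hold the cutover until the database changes have propagated; if the hook reports failure, the deployment stops. Below is a minimal sketch of such a hook, assuming a hypothetical database_changes_propagated() check; the hook reports its result to CodeDeploy with put_lifecycle_event_hook_execution_status.

```python
import boto3

codedeploy = boto3.client("codedeploy")

def before_allow_traffic(event, context):
    """BeforeAllowTraffic hook: block the cutover until the DB changes are in place."""
    # database_changes_propagated() is a hypothetical check against the database.
    ready = database_changes_propagated()
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status="Succeeded" if ready else "Failed",  # "Failed" stops the deployment
    )
```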

A software company wants to automate the build process for a project where the code is stored in GitHub. When the repository is updated, source code should be compiled, tested, and pushed to Amazon S3.

Which combination of steps would address these requirements? (Choose three.)

A. Add a buildspec.yml file to the source code with build instructions.
B. Configure a GitHub webhook to trigger a build every time a code change is pushed to the repository.
C. Create an AWS CodeBuild project with GitHub as the source repository.
D. Create an AWS CodeDeploy application with the Amazon EC2/On-Premises compute platform.
E. Create an AWS OpsWorks deployment with the install dependencies command.
F. Provision an Amazon EC2 instance to perform the build.
Answer: ABC
✅ Explanation
Scenario:
- Code is stored in GitHub.
- When the repository is updated, the code should be compiled, tested, and the artifacts pushed to Amazon S3.
- The entire build process should be automated.
Key points:
- Source control: GitHub
- Trigger: every code change pushed to GitHub
- Build steps: compile, test, then push artifacts to S3
- Automation: AWS-native services preferred (CodeBuild, CodePipeline, etc.)
Reviewing the options:
- A. Add a buildspec.yml file to the source code with build instructions. Required so CodeBuild knows how to build and test the code; buildspec.yml defines phases such as install, pre_build, build, and post_build, including the commands that push artifacts to S3. Needed.
- B. Configure a GitHub webhook to trigger a build every time a code change is pushed to the repository. A webhook is how builds are started automatically on each push; CodeBuild can manage this webhook when connected to GitHub. Needed.
- C. Create an AWS CodeBuild project with GitHub as the source repository. CodeBuild runs the build and test jobs, with GitHub specified as the source. Needed.
- D. Create an AWS CodeDeploy application with the Amazon EC2/On-Premises compute platform. CodeDeploy deploys applications to EC2 or on-premises servers; the requirement is only to build and push artifacts to S3. Not required.
- E. Create an AWS OpsWorks deployment with the install dependencies command. OpsWorks is a configuration management service for servers; nothing in the requirement calls for configuration management or server deployment. Not relevant.
- F. Provision an Amazon EC2 instance to perform the build. Running builds on a self-managed EC2 instance is unnecessary when the managed CodeBuild service can connect directly to GitHub. Not required.
Final answer: A, B, and C.
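A minimal boto3 sketch of steps B and C, assuming placeholder project name, repository URL, role ARN, and bucket; the buildspec.yml from step A lives in the repository and defines the install/build/post_build commands, including the push to S3.

```python
import boto3

codebuild = boto3.client("codebuild")

# Create a build project that pulls from GitHub and writes artifacts to S3.
codebuild.create_project(
    name="sample-build",
    source={"type": "GITHUB", "location": "https://github.com/example/repo.git"},
    artifacts={"type": "S3", "location": "example-artifact-bucket"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/codebuild-service-role",
)

# Register a webhook so every push to the repository starts a build.
codebuild.create_webhook(
    projectName="sample-build",
    filterGroups=[[{"type": "EVENT", "pattern": "PUSH"}]],
)
```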

An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance.

When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region.

How should the company meet these requirements with the LEAST amount of application changes?

A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases.
D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Answer: B
✅ Explanation
Requirements:
- Global expansion: the application will be deployed to multiple AWS Regions (Europe and Asia).
- Product catalog: must be shared and consistent across all Regions.
- Customer data and purchases: must remain within each Region for compliance (data residency laws such as GDPR).
- The least amount of application change is a priority.
Evaluating the options:
- A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases. ❌ Redshift is a data warehouse, not suited to the transactional (OLTP) access pattern of a product catalog, and it is not designed for a frequently updated, globally available catalog. Regional DynamoDB tables for customer data are fine, but overall this would require significant application changes.
- B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases. ✅ Global tables provide multi-Region, active-active replication, giving fast local reads and writes of the shared catalog in every Region, while regional tables keep customer data in place for compliance. Minimal application change.
- C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each Region for the customer information and purchases. ❌ Cross-Region Aurora read replicas are read-only, so multi-Region writes to the catalog need extra logic; managing read/write splits and failover means more application changes. Local Aurora for customer data is acceptable, but this is not the least-change option.
- D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases. ❌ This is inverted: the globally shared data is the catalog, and putting customer data in global tables replicates it across Regions, violating the data residency requirement.
✅ Correct answer: B. It gives global access and consistency for the product catalog, regional isolation for customer data, and minimal application changes.
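A minimal boto3 sketch of the global-table conversion for the catalog, assuming a placeholder table name and Regions (and that the table meets the global-table prerequisites); the customer and purchase tables are simply created per Region and never get replicas.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica per Region to the catalog table (global tables version 2019.11.21).
# Replicas are added one at a time; wait for the table to return to ACTIVE
# before adding the next one.
for region in ("eu-west-1", "ap-northeast-1"):
    ddb.update_table(
        TableName="product-catalog",  # placeholder table name
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
    ddb.get_waiter("table_exists").wait(TableName="product-catalog")
```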

A DevOps Engineer administers an application that manages video files for a video production company. The application runs on Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS PostgreSQL Multi-AZ DB instance, and the video files are stored in an Amazon S3 bucket. On a typical day, 50 GB of new video are added to the S3 bucket. The Engineer must implement a multi-region disaster recovery plan with the least data loss and the lowest recovery times. The current application infrastructure is already described using AWS CloudFormation.

Which deployment option should the Engineer choose to meet the uptime and recovery objectives for the system?

A. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create an Amazon RDS read replica in the second region. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, promote the read replica as master. Update the CloudFormation stack and increase the capacity of the Auto Scaling group.

B. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Create a scheduled task to take daily Amazon RDS cross-region snapshots to the second region. In the second region, enable cross-region replication between the original S3 bucket and Amazon Glacier. In a disaster, launch a new application stack in the second region and restore the database from the most recent snapshot.

C. Launch the application from the CloudFormation template in the second region, which sets the capacity of the Auto Scaling group to 1. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database, copy the snapshot to the second region, and replace the DB instance in the second region from the snapshot. In the second region, enable cross-region replication between the original S3 bucket and a new S3 bucket. To fail over, increase the capacity of the Auto Scaling group.

D. Use Amazon CloudWatch Events to schedule a nightly task to take a snapshot of the database and copy the snapshot to the second region. Create an AWS Lambda function that copies each object to a new S3 bucket in the second region in response to S3 event notifications. In the second region, launch the application from the CloudFormation template and restore the database from the most recent snapshot.
Answer: A
✅ Explanation
Requirements:
- Application: EC2 + ALB + Auto Scaling across AZs.
- Database: Amazon RDS PostgreSQL (Multi-AZ).
- File storage: Amazon S3 (50 GB of new video daily).
- Goal: multi-Region DR with the least data loss (low RPO) and the lowest recovery time (low RTO), reusing the existing CloudFormation templates.
🔍 Option A:
- Launch the application in the second Region with the Auto Scaling group capacity set to 1.
- Create an RDS read replica in the second Region.
- Enable cross-Region S3 replication to a new bucket.
- Fail over by promoting the replica to master and increasing the Auto Scaling group capacity.
✅ Pros:
- The read replica provides near real-time replication (low RPO).
- Cross-Region S3 replication keeps objects in sync almost immediately (low RPO).
- The pre-launched stack means failover is mostly scaling up, giving a low RTO with little manual effort.
🚫 Cons:
- Slightly higher cost for the read replica and replication traffic.
- The Auto Scaling group still has to be scaled out at failover time (fast via a CloudFormation update).
🟢 Option A gives the best balance of low RTO/RPO and cost.
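A minimal boto3 sketch of the database side of option A, assuming placeholder identifiers and ARNs: a cross-Region read replica stays in sync during steady state and is promoted at failover time, while S3 cross-Region replication (configured separately on the bucket) handles the video files.

```python
import boto3

# Client in the DR Region; identifiers and ARNs are placeholders.
rds_dr = boto3.client("rds", region_name="eu-west-1")

# Steady state: keep a cross-Region read replica in sync with the primary DB.
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="video-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:video-db",
    DBInstanceClass="db.r5.large",
)

# Failover: promote the replica to a standalone, writable primary.
rds_dr.promote_read_replica(DBInstanceIdentifier="video-db-replica")
```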

A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline to deploy an application to Amazon ECS using the blue/green deployment model. The company wants to implement scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less. If errors are discovered during these tests, the application must be rolled back.

Which strategy will meet these requirements?

A. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create an execution environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

B. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to execute an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.

C. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to trigger rollback.

D. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
Answer: C
✅ Explanation
With AWS CodeDeploy and Amazon ECS blue/green deployments, CodeDeploy handles the traffic shift between the old (blue) and new (green) task sets and exposes lifecycle events where you can run tests before production traffic moves.
🔍 What is AfterAllowTestTraffic?
- This lifecycle event fires after test traffic is routed to the green task set but before production traffic is shifted.
- It is the opportunity to run tests against the green version.
- If the hook fails (for example, the Lambda function returns an error), CodeDeploy rolls back the deployment.
💡 Why option C works best:
- AppSpec hooks are the right place for deployment lifecycle logic.
- AfterAllowTestTraffic is designed specifically for testing the green environment before the traffic shift.
- Failing the hook triggers a rollback, which is exactly what is required.
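As a complement to the hook itself (which reports Succeeded or Failed to CodeDeploy the same way as the BeforeAllowTraffic sketch earlier), the deployment group should have automatic rollback enabled so a failed AfterAllowTestTraffic hook rolls the deployment back. A minimal boto3 sketch, assuming placeholder application and deployment group names:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Ensure a deployment that fails (including a failed lifecycle hook) is
# rolled back automatically. Names are placeholders.
codedeploy.update_deployment_group(
    applicationName="orders-ecs-app",
    currentDeploymentGroupName="orders-ecs-dg",
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE"],
    },
)
```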

A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An application uses a MySQL database and Amazon EC2 web servers. The development team needs a strategy for failover and disaster recovery.

Which combination of deployment strategies will meet these requirements? (Choose two.)

A. Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's automatic recovery capabilities in the event of a disaster.
B. Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the secondary Region as the master for the application.
C. Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load Balancer to balance the database traffic in different Regions.
D. Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling groups in each Region to adjust capacity based on demand.
E. Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance count to increase baseline capacity in the failover Region.
Answer: BD
✅ Explanation
✅ B. Aurora global database
- Aurora global databases are designed for low-latency global reads and cross-Region disaster recovery.
- Replication lag is typically under one second, giving an RPO well within the 2-hour requirement.
- If the primary Region fails, the secondary Region can be promoted to primary within minutes, meeting the 10-minute RTO.
- Ideal for mission-critical multi-Region applications.
✅ D. Multi-Region active-passive application with Route 53 failover
- A Route 53 failover routing policy automatically redirects traffic to the healthy Region.
- Health checks on the ALBs detect Region availability.
- Auto Scaling groups in both Regions keep the application scalable and ready.
- This architecture enables fast failover (low RTO) and redundancy.
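A minimal boto3 sketch of the database half of the failover in option B, assuming placeholder identifiers: detaching the secondary cluster from the Aurora global database makes it a standalone, writable cluster, while the Route 53 failover records from option D redirect users once the primary Region's health checks fail.

```python
import boto3

# Run against the secondary Region during a disaster; identifiers are placeholders.
rds = boto3.client("rds", region_name="eu-west-1")

# Detach the secondary cluster from the Aurora global database. Once detached,
# it becomes a standalone, writable cluster that the application can use.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="orders-global",
    DbClusterIdentifier="arn:aws:rds:eu-west-1:111122223333:cluster:orders-secondary",
)
```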