Exam: AWS Certified Developer - Associate

Total Questions: 428

A gaming website gives users the ability to trade game items with each other on the platform. The platform requires both users' records to be updated and persisted in one transaction. If any update fails, the transaction must roll back.

Which AWS solution can provide the transactional capability that is required for this feature?


A. Amazon DynamoDB with operations made with the Consistent Read parameter set to true
B. Amazon ElastiCache for Memcached with operations made within a transaction block
C. Amazon DynamoDB with reads and writes made by using Transact* operations
D. Amazon Aurora MySQL with operations made within a transaction block
E. Amazon Athena with operations made within a transaction block
Answer: C

✅ Explanation
The scenario requires atomic transactions: either both users' records are updated successfully, or neither is. Let's analyze the options:

✅ C. Amazon DynamoDB with reads and writes made by using Transact* operations
Correct. DynamoDB supports ACID transactions through TransactWriteItems and TransactGetItems. These Transact* operations provide all-or-nothing semantics, which is exactly what is required, and they suit serverless, highly scalable applications like gaming platforms.

❌ A. Amazon DynamoDB with operations made with the Consistent Read parameter set to true
Incorrect. Consistent Read ensures the latest data is read but provides no transactional safety for writes. It does not prevent partial writes or failed updates.

❌ B. Amazon ElastiCache for Memcached with operations made within a transaction block
Incorrect. Memcached does not support transactions. ElastiCache is a caching layer, not a durable transactional store.

✅/❌ D. Amazon Aurora MySQL with operations made within a transaction block
Aurora supports SQL transactions, so it could provide transactional consistency. However, compared with DynamoDB it requires more management and is less scalable for high-volume, real-time use cases like in-game item trading, so it is not the best answer in this serverless, cloud-native context.

❌ E. Amazon Athena with operations made within a transaction block
Incorrect. Athena is a query service for S3-based data lakes. It is designed for reading data, not for transactional updates in real time.

✅ Final Answer: C. Amazon DynamoDB with reads and writes made by using Transact* operations
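As a rough illustration of the Transact* approach, a trade could be committed atomically with the AWS CLI; the table name, key names, attribute names, and item ID below are hypothetical, not from the question:

```bash
# Hypothetical sketch: atomically move item "sword-7" from user A to user B.
# Both updates succeed together, or DynamoDB cancels the whole transaction.
aws dynamodb transact-write-items --transact-items '[
  {
    "Update": {
      "TableName": "GameUsers",
      "Key": {"UserId": {"S": "user-a"}},
      "UpdateExpression": "DELETE OwnedItems :item",
      "ConditionExpression": "contains(OwnedItems, :sword)",
      "ExpressionAttributeValues": {
        ":item": {"SS": ["sword-7"]},
        ":sword": {"S": "sword-7"}
      }
    }
  },
  {
    "Update": {
      "TableName": "GameUsers",
      "Key": {"UserId": {"S": "user-b"}},
      "UpdateExpression": "ADD OwnedItems :item",
      "ExpressionAttributeValues": {":item": {"SS": ["sword-7"]}}
    }
  }
]'
```

The condition expression on the seller's record also shows why transactions matter here: if user A no longer owns the item, the whole trade is rejected, not just one half of it.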

A company hosts a three-tier web application on AWS behind an Amazon CloudFront distribution. A developer wants a dashboard to monitor error rates and anomalies of the CloudFront distribution with the shortest possible refresh interval.

Which combination of steps should the developer take to meet these requirements? (Choose two.)

A. Activate real-time logs on the CloudFront distribution. Create a stream in Amazon Kinesis Data Streams.
B. Export the CloudFront logs to an Amazon S3 bucket. Detect anomalies and error rates with Amazon QuickSight.
C. Configure Amazon Kinesis Data Streams to deliver logs to Amazon OpenSearch Service (Amazon Elasticsearch Service). Create a dashboard in OpenSearch Dashboards (Kibana).
D. Create Amazon CloudWatch alarms based on expected values of selected CloudWatch metrics to detect anomalies and errors.
E. Design an Amazon CloudWatch dashboard of the selected CloudFront distribution metrics.
Answer: AE

✅ Explanation
✅ A. Activate real-time logs on the CloudFront distribution. Create a stream in Amazon Kinesis Data Streams.
Real-time logs for CloudFront capture log data with minimal delay (within seconds). Kinesis Data Streams can ingest this data immediately, enabling near-real-time processing and alerting, which is essential for fast detection of anomalies and errors.

✅ E. Design an Amazon CloudWatch dashboard of the selected CloudFront distribution metrics.
CloudFront integrates with Amazon CloudWatch, which provides built-in metrics such as 4xx/5xx error rates and cache statistics. CloudWatch dashboards offer near-real-time visualization (typically 1-minute granularity) and are suitable for custom, live-updating dashboards.

Other Options:
❌ B. Export the CloudFront logs to an Amazon S3 bucket. Detect anomalies and error rates with Amazon QuickSight.
This process involves batch export and delayed analysis. It is not suitable for short refresh intervals or real-time monitoring.

✅/❌ C. Configure Amazon Kinesis Data Streams to deliver logs to Amazon OpenSearch Service (Amazon Elasticsearch Service). Create a dashboard in OpenSearch Dashboards (Kibana).
A valid solution for real-time search and analytics, but it adds complexity and requires more setup and management than options A and E. Acceptable, but not the most direct or AWS-native monitoring method.

✅/❌ D. Create Amazon CloudWatch alarms based on expected values of selected CloudWatch metrics to detect anomalies and errors.
Alarms can complement a monitoring strategy, especially for alerting, but they do not provide a dashboard. Alarms alone are not sufficient for a visual monitoring solution, even though they help with anomaly detection.

✅ Final Answer:
A. Activate real-time logs on the CloudFront distribution. Create a stream in Amazon Kinesis Data Streams.
E. Design an Amazon CloudWatch dashboard of the selected CloudFront distribution metrics.
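For reference, a minimal sketch of wiring up option A with the AWS CLI; the stream name, config name, role ARN, account ID, and field selection are placeholders, not values from the question:

```bash
# Create a Kinesis data stream to receive the real-time logs.
aws kinesis create-stream --stream-name cf-realtime-logs --shard-count 1

# Create a CloudFront real-time log configuration that writes selected
# fields to the stream. It still has to be attached to the distribution's
# cache behavior afterward (console or update-distribution).
aws cloudfront create-realtime-log-config \
  --name cf-error-monitoring \
  --sampling-rate 100 \
  --fields timestamp c-ip sc-status cs-uri-stem \
  --end-points '[{
    "StreamType": "Kinesis",
    "KinesisStreamConfig": {
      "RoleARN": "arn:aws:iam::123456789012:role/CloudFrontRealtimeLogRole",
      "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/cf-realtime-logs"
    }
  }]'
```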

A company has an online order website that uses Amazon DynamoDB to store item inventory. A sample of the inventory object is as follows:

[Sample inventory object image not included]
A developer needs to reduce all inventory prices by 100 as long as the resulting price would not be less than 500.

What should the developer do to make this change with the LEAST number of calls to DynamoDB?

A. Perform a DynamoDB Query operation with the Id. If the price is >= 600, perform an UpdateItem operation to update the price.
B. Perform a DynamoDB UpdateItem operation with a condition expression of "Price >= 600".
C. Perform a DynamoDB UpdateItem operation with a condition expression of "ProductCategory IN ({"S": "Sporting Goods"}) and Price 600".
D. Perform a DynamoDB UpdateItem operation with a condition expression of "MIN Price = 500".
Answer: B
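A single conditional UpdateItem is one call per item: the condition "Price >= 600" guarantees that subtracting 100 never leaves a price below 500, and DynamoDB rejects the write otherwise, so no separate read is needed (option A costs two calls). A hedged AWS CLI sketch; the table name and key are assumptions, since the sample object is not shown:

```bash
# Hypothetical sketch: reduce the price by 100 only if the current price
# is at least 600, so the result never falls below 500. Otherwise the
# call fails with ConditionalCheckFailedException and nothing is written.
aws dynamodb update-item \
  --table-name Inventory \
  --key '{"Id": {"N": "123"}}' \
  --update-expression "SET Price = Price - :discount" \
  --condition-expression "Price >= :minBefore" \
  --expression-attribute-values '{":discount": {"N": "100"}, ":minBefore": {"N": "600"}}'
```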

A company is using an AWS Lambda function to process records from an Amazon Kinesis data stream. The company recently observed slow processing of the records. A developer notices that the iterator age metric for the function is increasing and that the Lambda run duration is constantly above normal.

Which actions should the developer take to increase the processing speed? (Choose two.)

A. Increase the number of shards of the Kinesis data stream.
B. Decrease the timeout of the Lambda function.
C. Increase the memory that is allocated to the Lambda function.
D. Decrease the number of shards of the Kinesis data stream.
E. Increase the timeout of the Lambda function.
Answer: AC

✅ Explanation
✅ Correct Answers:
A. Increase the number of shards of the Kinesis data stream.
Each shard is processed by one concurrent Lambda invocation. Increasing the number of shards increases parallelism, which helps reduce iterator age when the function is bottlenecked by insufficient parallel processing.

C. Increase the memory that is allocated to the Lambda function.
Increasing memory also increases the CPU resources for Lambda, which directly improves execution speed. Faster execution means lower run duration and helps work off the backlog and reduce iterator age.

❌ Incorrect Answers:
B. Decrease the timeout of the Lambda function.
Decreasing the timeout does not improve performance. It increases the risk that longer-running executions time out, potentially causing record loss or retry storms.

D. Decrease the number of shards of the Kinesis data stream.
This would reduce parallelism, which would worsen the problem, not fix it.

E. Increase the timeout of the Lambda function.
This may prevent timeouts, but it does not improve processing speed. The issue is slow processing and backlog; a longer timeout just allows slow runs to run longer.

✅ Final Answer:
A. Increase the number of shards of the Kinesis data stream.
C. Increase the memory that is allocated to the Lambda function.
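A hedged sketch of both remediations with the AWS CLI; the stream name, function name, and target values are placeholders:

```bash
# Increase the shard count to raise the number of parallel Lambda
# invocations (uniform scaling splits the existing shards evenly).
aws kinesis update-shard-count \
  --stream-name my-data-stream \
  --target-shard-count 4 \
  --scaling-type UNIFORM_SCALING

# Raise the Lambda memory allocation; CPU scales with memory.
aws lambda update-function-configuration \
  --function-name my-stream-processor \
  --memory-size 1024
```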

A developer is making changes to a custom application that uses AWS Elastic Beanstalk.

Which solutions will update the Elastic Beanstalk environment with the new application version after the developer completes the changes? (Choose two.)

A. Package the application code into a .zip file. Use the AWS Management Console to upload the .zip file and deploy the packaged application.
B. Package the application code into a .tar file. Use the AWS Management Console to create a new application version from the .tar file. Update the environment by using the AWS CLI.
C. Package the application code into a .tar file. Use the AWS Management Console to upload the .tar file and deploy the packaged application.
D. Package the application code into a .zip file. Use the AWS CLI to create a new application version from the .zip file and to update the environment.
E. Package the application code into a .zip file. Use the AWS Management Console to create a new application version from the .zip file. Rebuild the environment by using the AWS CLI.
Answer: AD

✅ Explanation
✅ A. Package the application code into a .zip file. Use the AWS Management Console to upload the .zip file and deploy the packaged application.
Correct. This is a common and valid way to deploy changes. The console allows uploading .zip files directly and selecting them for deployment to the environment.

❌ B. Package the application code into a .tar file. Use the AWS Management Console to create a new application version from the .tar file. Update the environment by using the AWS CLI.
Incorrect. Elastic Beanstalk source bundles must be .zip (or .war) files, not .tar files, whether uploaded through the console or the CLI. Even if the update step is correct, the file format is invalid.

❌ C. Package the application code into a .tar file. Use the AWS Management Console to upload the .tar file and deploy the packaged application.
Incorrect. Again, .tar is not a supported source bundle format.

✅ D. Package the application code into a .zip file. Use the AWS CLI to create a new application version from the .zip file and to update the environment.
Correct. This is the CLI method to deploy new application versions:

```bash
aws elasticbeanstalk create-application-version --application-name MyApp --version-label v1 --source-bundle S3Bucket=my-bucket,S3Key=my-app.zip
aws elasticbeanstalk update-environment --environment-name MyEnv --version-label v1
```

❌ E. Package the application code into a .zip file. Use the AWS Management Console to create a new application version from the .zip file. Rebuild the environment by using the AWS CLI.
Incorrect. Rebuilding the environment is more drastic: it terminates and recreates resources. Updating the environment is sufficient to deploy a new version.

✅ Final Answer:
A. Package the application code into a .zip file. Use the AWS Management Console to upload the .zip file and deploy the packaged application.
D. Package the application code into a .zip file. Use the AWS CLI to create a new application version from the .zip file and to update the environment.

A company has an application where reading objects from Amazon S3 is based on the type of user. The user types are registered user and guest user. The company has 25,000 users and is growing. Information is pulled from an S3 bucket depending on the user type.

Which approaches are recommended to provide access to both user types? (Choose two.)

A. Provide a different access key and secret access key in the application code for registered users and guest users to provide read access to the objects.
B. Use S3 bucket policies to restrict read access to specific IAM users.
C. Use Amazon Cognito to provide access using authenticated and unauthenticated roles.
D. Create a new IAM user for each user and grant read access.
E. Use the AWS IAM service and let the application assume the different roles using the AWS Security Token Service (AWS STS) AssumeRole action depending on the type of user and provide read access to Amazon S3 using the assumed role.
Answer: CE

✅ Explanation
✅ Correct Answers:
C. Use Amazon Cognito to provide access using authenticated and unauthenticated roles.
Best practice for web/mobile apps with different user types. Cognito lets authenticated users (registered, logged-in users) assume one IAM role and unauthenticated (guest) users assume another, and each role can be given customized S3 permissions. It is scalable, secure, and fully managed by AWS.

E. Use the AWS IAM service and let the application assume the different roles using the AWS Security Token Service (AWS STS) AssumeRole action depending on the type of user and provide read access to Amazon S3 using the assumed role.
Good choice for server-side applications. The backend dynamically assumes different IAM roles based on user type, with roles defining specific S3 permissions, and STS issues temporary credentials for either user type. Also secure and scalable.
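A minimal sketch of the STS approach from option E; the role ARN, account ID, and session name are hypothetical:

```bash
# Assume the role that grants read access for registered users;
# guests would assume a more restrictive role instead.
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/S3ReadRegisteredUsers \
  --role-session-name registered-user-session

# The response contains temporary credentials (AccessKeyId,
# SecretAccessKey, SessionToken) that the application then uses
# for its S3 read calls on behalf of that user type.
```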

A developer is writing an application to analyze the traffic to a fleet of Amazon EC2 instances. The EC2 instances run behind a public Application Load Balancer (ALB). An HTTP server runs on each of the EC2 instances, logging all requests to a log file.

The developer wants to capture the client public IP addresses. The developer analyzes the log files and notices only the IP address of the ALB.

What must the developer do to capture the client public IP addresses in the log file?

A. Add a Host header to the HTTP server log configuration file.
B. Install the Amazon CloudWatch Logs agent on each EC2 instance. Configure the agent to write to the log file.
C. Install the AWS X-Ray daemon on each EC2 instance. Configure the daemon to write to the log file.
D. Add an X-Forwarded-For header to the HTTP server log configuration file.
Answer: D

✅ Explanation
✅ Correct Answer: D. Add an X-Forwarded-For header to the HTTP server log configuration file.
The ALB terminates the client connection, so the HTTP server sees only the ALB's IP address. The ALB forwards the original client IP in the X-Forwarded-For header, and most web servers (e.g., Apache, NGINX) can log that header with a custom log format.
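As illustrative snippets (hypothetical log formats and file paths, to be adapted to the server in use), this is how the header is typically added to the access log:

```
# NGINX (inside the http block): include X-Forwarded-For in the log format.
log_format main '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status';
access_log /var/log/nginx/access.log main;

# Apache: %{X-Forwarded-For}i logs the header value.
LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %b" combined_xff
CustomLog /var/log/apache2/access.log combined_xff
```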

A developer is writing a new AWS Serverless Application Model (AWS SAM) template with a new AWS Lambda function. The Lambda function runs complex code. The developer wants to test the Lambda function with more CPU power.

What should the developer do to meet this requirement?

A. Increase the runtime engine version.
B. Increase the timeout.
C. Increase the number of Lambda layers.
D. Increase the memory.
Answer: D

✅ Explanation
✅ D. Increase the memory.
In AWS Lambda, memory and CPU are linked: when you increase the memory allocation, AWS automatically allocates more CPU and other resources proportionally. This is the only way to increase CPU power for a Lambda function, and it is useful for complex or CPU-intensive code.
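In an AWS SAM template this is the MemorySize property on the function resource. A minimal sketch; the function name, handler, runtime, and values are placeholders:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ComplexFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: python3.12
      MemorySize: 2048   # CPU scales proportionally with memory.
      Timeout: 30
```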

A developer uses a single AWS CloudFormation template to configure the test environment and the production environment for an application. The developer handles environment-specific requirements in the CloudFormation template.

The developer decides to update the Amazon EC2 Auto Scaling launch template with new Amazon Machine Images (AMIs) for each environment. The CloudFormation update for the new AMIs is successful in the test environment, but the update fails in the production environment.

What are the possible causes of the CloudFormation update failure in the production environment? (Choose two.)

A. The new AMIs do not fulfill the specified conditions in the CloudFormation template.
B. The service quota for the number of EC2 vCPUs in the AWS Region has been exceeded.
C. The security group that is specified in the CloudFormation template does not exist.
D. CloudFormation does not recognize the template change as an update.
E. CloudFormation does not have sufficient IAM permissions to make the changes.
Answer: BE

✅ Explanation
✅ Correct Answers:
B. The service quota for the number of EC2 vCPUs in the AWS Region has been exceeded.
Possible and common in production environments. Launching new EC2 instances with the new AMIs might exceed vCPU or instance limits, especially if the new AMIs use larger instance types. That would cause the Auto Scaling group update, and therefore the CloudFormation update, to fail.

E. CloudFormation does not have sufficient IAM permissions to make the changes.
CloudFormation requires appropriate IAM permissions to update launch templates, Auto Scaling groups, and related resources. The IAM roles used in production may have more restrictive policies than those in test, and missing permissions cause stack updates to fail.

❌ Incorrect Answers:
A. The new AMIs do not fulfill the specified conditions in the CloudFormation template.
If the AMI values did not satisfy the conditions, CloudFormation would skip the corresponding resources based on the condition logic rather than fail outright, and such a misconfiguration would fail in both environments.

C. The security group that is specified in the CloudFormation template does not exist.
If this were the case, the update would also fail in the test environment unless different values are hardcoded per environment. More likely, this is managed consistently across environments using parameters or mappings.

D. CloudFormation does not recognize the template change as an update.
If CloudFormation did not recognize a change, it would not attempt an update at all rather than fail, so this is irrelevant in a failure scenario.

✅ Final Answer:
B. The service quota for the number of EC2 vCPUs in the AWS Region has been exceeded.
E. CloudFormation does not have sufficient IAM permissions to make the changes.
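To check whether the vCPU quota is the bottleneck, one option is the Service Quotas CLI. A hedged sketch; the quota code below is the one commonly used for Running On-Demand Standard instances and should be verified for the account and Region in question:

```bash
# Look up the applied vCPU quota for On-Demand Standard instances
# in the current Region, then compare it with actual usage.
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A
```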

A developer is creating a serverless web application and maintains different branches of code. The developer wants to avoid updating the Amazon API Gateway target endpoint each time a new code push is performed.

What solution would allow the developer to perform a code push efficiently, without the need to update the API Gateway?

A. Associate different AWS Lambda functions to an API Gateway target endpoint.
B. Create different stages in API Gateway, and then associate API Gateway with AWS Lambda.
C. Create aliases and versions in AWS Lambda.
D. Tag the AWS Lambda functions with different names.
Answer: C

✅ Explanation
✅ C. Create aliases and versions in AWS Lambda.
When Amazon API Gateway is integrated with AWS Lambda, it is a best practice to publish versions of the Lambda function, create aliases (like dev, test, prod) that point to specific versions, and associate API Gateway with an alias rather than the $LATEST version.

This allows the developer to push new code (publish a new version) and point an existing alias (e.g., prod) at the new version, without having to update the API Gateway integration each time.

This setup makes deployments safer (by controlling which version is live), more efficient (API Gateway is never reconfigured), and environment-friendly (multiple aliases can simulate branches or stages).
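A hedged CLI sketch of the version/alias flow; the function name, alias name, and version numbers are placeholders:

```bash
# Publish the current code as an immutable version.
aws lambda publish-version --function-name my-api-handler

# First deployment only: create the alias that API Gateway targets.
aws lambda create-alias --function-name my-api-handler \
  --name prod --function-version 1

# After each new push: publish a version, then repoint the alias.
# The API Gateway integration (which targets the alias ARN) is untouched.
aws lambda update-alias --function-name my-api-handler \
  --name prod --function-version 2
```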