Exam: GCP: Professional Cloud Developer

Total Questions: 296

You want to upload files from an on-premises virtual machine to Google Cloud Storage as part of a data migration. These files will be consumed by a Cloud Dataproc Hadoop cluster in a GCP environment.

Which command should you use?

A. gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
B. gcloud cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
C. hadoop fs cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
D. gcloud dataproc cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
Answer: A ✅ Explanation:
- gsutil is the standard command-line tool for interacting with Google Cloud Storage (GCS).
- The cp command copies files from your local file system to GCS.
- This is the recommended way to upload files from an on-premises VM to GCS, especially for data migration.
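A minimal sketch of the answer in practice (the bucket name and local paths are hypothetical):

```shell
# Copy a single local file into a Cloud Storage bucket.
gsutil cp /data/events.csv gs://my-migration-bucket/

# For a migration with many files, -m parallelizes the transfer
# and -r recurses into directories.
gsutil -m cp -r /data/logs gs://my-migration-bucket/logs/
```

Dataproc jobs can then read the uploaded objects directly via gs:// paths, since the cluster's Cloud Storage connector is preinstalled.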

Your company wants to expand their users outside the United States for their popular application. The company wants to ensure 99.999% availability of the database for their application and also wants to minimize the read latency for their users across the globe.

Which two actions should they take? (Choose two.)

A. Create a multi-regional Cloud Spanner instance with "nam-asia-eur1" configuration.
B. Create a multi-regional Cloud Spanner instance with "nam3" configuration.
C. Create a cluster with at least 3 Spanner nodes.
D. Create a cluster with at least 1 Spanner node.
E. Create a minimum of two Cloud Spanner instances in separate regions with at least one node.
F. Create a Cloud Dataflow pipeline to replicate data across different databases
Answer: AC ✅ Explanation:
To achieve 99.999% availability and low global read latency, Cloud Spanner offers multi-regional instances with synchronous replication across continents.
A. Create a multi-regional Cloud Spanner instance with "nam-asia-eur1" configuration:
- This configuration spans North America, Europe, and Asia, offering global availability and low-latency reads close to users around the world.
- It is designed to deliver a 99.999% availability SLA, which suits applications with a global user base.
C. Create a cluster with at least 3 Spanner nodes:
- A minimum of 3 nodes is recommended for production-level workloads.
- More nodes improve throughput, fault tolerance, and latency through better resource distribution.
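As a sketch, a multi-regional instance can be created with gcloud; note that the configuration the exam calls "nam-asia-eur1" is named nam-eur-asia1 in the actual instance-configuration list, and the instance/database names here are hypothetical:

```shell
# Create a three-continent multi-regional Spanner instance with 3 nodes.
gcloud spanner instances create global-app-instance \
  --config=nam-eur-asia1 \
  --nodes=3 \
  --description="Globally replicated instance for the application"
```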

You are developing a web application that will be accessible over both HTTP and HTTPS and will run on Compute Engine instances. On occasion, you will need to SSH from your remote laptop into one of the Compute Engine instances to conduct maintenance on the app. How should you configure the instances while following Google-recommended best practices?

A. Set up a backend with Compute Engine web server instances with a private IP address behind a TCP proxy load balancer.
B. Configure the firewall rules to allow all ingress traffic to connect to the Compute Engine web servers, with each server having a unique external IP address.
C. Configure Cloud Identity-Aware Proxy API for SSH access. Then configure the Compute Engine servers with private IP addresses behind an HTTP(s) load balancer for the application web traffic.
D. Set up a backend with Compute Engine web server instances with a private IP address behind an HTTP(S) load balancer. Set up a bastion host with a public IP address and open firewall ports. Connect to the web instances using the bastion host.
Answer: C ✅ Explanation:
- Identity-Aware Proxy (IAP) TCP forwarding lets you SSH into Compute Engine instances that have only private IP addresses, so no external IPs or broad firewall openings are required. Access is controlled through IAM, which is the Google-recommended alternative to maintaining a bastion host.
- The web application itself is served through an HTTP(S) load balancer in front of the private instances, handling both HTTP and HTTPS traffic.
- Options B and D expose public IPs and open firewall ports, increasing the attack surface; option A's TCP proxy load balancer does not address the SSH access requirement.
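A sketch of the IAP-based SSH setup (instance, zone, and rule names are hypothetical; 35.235.240.0/20 is the documented IAP forwarding range):

```shell
# Allow IAP's TCP forwarding range to reach port 22 on the instances.
gcloud compute firewall-rules create allow-iap-ssh \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 \
  --source-ranges=35.235.240.0/20

# SSH from a laptop to a private-IP instance through the IAP tunnel.
gcloud compute ssh web-server-1 --zone=us-central1-a --tunnel-through-iap
```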

You have a mixture of packaged and internally developed applications hosted on a Compute Engine instance that is running Linux. These applications write log records as text in local files. You want the logs to be written to Cloud Logging. What should you do?

A. Pipe the content of the files to the Linux Syslog daemon.
B. Install a Google version of fluentd on the Compute Engine instance.
C. Install a Google version of collectd on the Compute Engine instance.
D. Using cron, schedule a job to copy the log files to Cloud Storage once a day
Answer: B ✅ Explanation:
- To collect and send local log files from a Linux Compute Engine instance to Cloud Logging (formerly Stackdriver Logging), the recommended solution is the Cloud Logging agent, which is a Google-modified version of fluentd.
- Fluentd is a widely used data collector that can be configured to read local log files and forward them to Cloud Logging.
- Google's version comes pre-configured to work well with GCP services and Cloud Logging.
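A sketch of installing the agent and tailing a custom log file (the app log path and tag are hypothetical; newer deployments may use the Ops Agent instead, which bundles the same fluentd-based logging):

```shell
# Install the Google-provided Logging agent (google-fluentd).
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install

# Add a config so the agent tails the application's local log files.
sudo tee /etc/google-fluentd/config.d/my-app.conf <<'EOF'
<source>
  @type tail
  path /var/log/my-app/*.log
  pos_file /var/lib/google-fluentd/pos/my-app.pos
  tag my-app
  format none
</source>
EOF

sudo service google-fluentd restart
```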

You want to create `fully baked` or `golden` Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

A. Embed the appropriate database connection string in the image. Create a different image for each environment.
B. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.
C. When creating the Compute Engine instance, create a metadata item with a key of 'DATABASE' and a value for the appropriate database connection string. In your application, read the 'DATABASE' environment variable, and use the value to connect to the appropriate database.
D. When creating the Compute Engine instance, create a metadata item with a key of 'DATABASE' and a value for the appropriate database connection string. In your application, query the metadata server for the 'DATABASE' value, and use the value to connect to the appropriate database.
Answer: D ✅ Explanation:
- Using instance metadata is the standard, recommended approach in Google Cloud for customizing instance behavior at boot or runtime, especially for differentiating environments like test, staging, and production.
- Metadata values are easily configurable at instance creation and avoid hardcoding connection strings inside the image, so a single golden image can be reused across all environments.
- Applications can query the metadata server from within the VM via a simple HTTP call:
  curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE" -H "Metadata-Flavor: Google"
- Option C is wrong because metadata items are not exposed as environment variables; they must be read from the metadata server.
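A sketch of the whole flow, from setting the metadata at creation time to reading it back inside the VM (image, project, and connection-string values are hypothetical):

```shell
# Launch a staging instance from the golden image, attaching the
# environment-specific connection string as custom metadata.
gcloud compute instances create app-server-staging \
  --image-family=my-golden-image --image-project=my-project \
  --metadata=DATABASE="postgres://db-staging.internal:5432/app"

# Inside the VM, the bootstrap script reads the value back.
curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE" \
  -H "Metadata-Flavor: Google"
```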

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?

A. Configure the appropriate service accounts, and use Workload Identity to run the pods.
B. Store the application credentials as Kubernetes Secrets, and expose them as environment variables.
C. Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.
D. Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.
Answer: A ✅ Explanation:
- Workload Identity is the recommended, secure way to access Google Cloud services (like Spanner) from applications running on Google Kubernetes Engine (GKE). It allows Kubernetes service accounts to impersonate Google service accounts without the need to manage or distribute keys manually.
- Security best practices: no long-lived credentials or secrets are stored in the pods.
- Least privilege: you can grant minimal IAM permissions to the GCP service account.
- No/minimal code changes: the application can use Application Default Credentials, which work seamlessly with Workload Identity.
- Fully managed: it integrates tightly with IAM, is auditable, and reduces operational overhead.
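The binding between the Kubernetes service account (KSA) and the Google service account (GSA) can be sketched as follows (all project, namespace, and account names are hypothetical):

```shell
# Allow the KSA to impersonate the GSA via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  spanner-app@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[prod/spanner-app-ksa]"

# Annotate the KSA so pods using it pick up the GSA's credentials.
kubectl annotate serviceaccount spanner-app-ksa --namespace prod \
  iam.gke.io/gcp-service-account=spanner-app@my-project.iam.gserviceaccount.com
```

With this in place, the pod's Spanner client library authenticates through Application Default Credentials with no key files and no code changes.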

You are deploying your application on a Compute Engine instance that communicates with Cloud SQL. You will use Cloud SQL Proxy to allow your application to communicate to the database using the service account associated with the application's instance. You want to follow the Google-recommended best practice of providing minimum access for the role assigned to the service account. What should you do?

A. Assign the Project Editor role.
B. Assign the Project Owner role.
C. Assign the Cloud SQL Client role.
D. Assign the Cloud SQL Editor role.
Answer: C ✅ Explanation:
- To connect a Compute Engine instance to a Cloud SQL database through the Cloud SQL Auth Proxy, the service account running the application needs only the Cloud SQL Client role (roles/cloudsql.client), following the principle of least privilege.
- This role allows connecting to Cloud SQL instances via the proxy, with no excessive permissions such as editing or deleting instances.
- It is sufficient for most applications that only need to connect and query data.
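A sketch of the grant and the proxy invocation, assuming the v2 proxy binary and hypothetical project, service account, and instance names:

```shell
# Grant the least-privilege role to the instance's service account.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

# Start the Cloud SQL Auth Proxy; it authenticates with the
# instance's attached service account credentials.
./cloud-sql-proxy my-project:us-central1:my-db --port 5432
```

The application then connects to localhost:5432 as if the database were local.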

Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you do?

A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
Answer: C ✅ Explanation:
The service is stateless, is only accessed from inside the cluster, and must scale quickly in response to changing demand.
- Horizontal Pod Autoscaler (HPA): ✅ best suited for stateless services; it adds or removes pod replicas based on CPU/memory usage or custom metrics, and responds to load changes faster than the Vertical Pod Autoscaler.
- Vertical Pod Autoscaler (VPA): ❌ adjusts resource requests and limits rather than pod count, can cause pod restarts during scaling, and is typically used for batch or infrequently scaled workloads.
- A ClusterIP Service exposes the pods only within the cluster, matching the requirement that the service be reached solely by other in-cluster services; a NodePort Service would unnecessarily open a port on every node.
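The HPA-plus-ClusterIP combination can be sketched with two kubectl commands (deployment name, ports, and thresholds are hypothetical):

```shell
# Expose the deployment inside the cluster only
# (ClusterIP is the default Service type).
kubectl expose deployment my-service --port=80 --target-port=8080 \
  --type=ClusterIP

# Autoscale between 2 and 20 replicas, targeting 70% average CPU.
kubectl autoscale deployment my-service --min=2 --max=20 --cpu-percent=70
```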

You recently migrated a monolithic application to Google Cloud by breaking it down into microservices. One of the microservices is deployed using Cloud Functions. As you modernize the application, you make a change to the API of the service that is backward-incompatible. You need to support both existing callers who use the original API and new callers who use the new API. What should you do?

A. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use a load balancer to distribute calls between the versions.
B. Leave the original Cloud Function as-is and deploy a second Cloud Function that includes only the changed API. Calls are automatically routed to the correct function.
C. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use Cloud Endpoints to provide an API gateway that exposes a versioned API.
D. Re-deploy the Cloud Function after making code changes to support the new API. Requests for both versions of the API are fulfilled based on a version identifier included in the call.
Answer: C ✅ Explanation:
- A backward-incompatible change to an API served by a Cloud Function means you must support old and new clients simultaneously, which calls for API versioning.
- Cloud Endpoints acts as an API gateway and supports versioning, routing requests by URL path (e.g., /v1/, /v2/), query parameters, or HTTP headers.
- Deploy the original Cloud Function (e.g., function-v1) for legacy clients and a new Cloud Function (e.g., function-v2) with the updated API, then expose both versions under one unified gateway to manage, secure, and monitor API traffic.
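A sketch of an Endpoints OpenAPI spec that routes /v1 and /v2 paths to two separate Cloud Functions via x-google-backend (the service host, project, function URLs, and operation names are all hypothetical):

```shell
cat > openapi.yaml <<'EOF'
swagger: "2.0"
info:
  title: news-api
  version: "2.0.0"
host: news-api.endpoints.my-project.cloud.goog
paths:
  /v1/comments:
    get:
      operationId: listCommentsV1
      x-google-backend:
        address: https://us-central1-my-project.cloudfunctions.net/function-v1
      responses:
        "200":
          description: OK
  /v2/comments:
    get:
      operationId: listCommentsV2
      x-google-backend:
        address: https://us-central1-my-project.cloudfunctions.net/function-v2
      responses:
        "200":
          description: OK
EOF

gcloud endpoints services deploy openapi.yaml
```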

You are developing an application that will allow users to read and post comments on news articles. You want to configure your application to store and display user-submitted comments using Firestore. How should you design the schema to support an unknown number of comments and articles?

A. Store each comment in a subcollection of the article.
B. Add each comment to an array property on the article.
C. Store each comment in a document, and add the comment's key to an array property on the article.
D. Store each comment in a document, and add the comment's key to an array property on the user profile.
Answer: A ✅ Explanation:
- Firestore is a NoSQL document database optimized for hierarchical, scalable data models, and subcollections are the preferred way to model one-to-many relationships such as one article → many comments.
- Storing each comment as a document in a subcollection under its article document gives efficient queries (e.g., retrieve all comments for a specific article) and easy pagination and filtering (e.g., sort by timestamp).
- It also scales to an unknown number of comments: the array-based designs in options B-D would eventually hit the 1 MiB per-document size limit on the article or user-profile document.
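The resulting document hierarchy (articles/{articleId}/comments/{commentId}) can be sketched with the Firestore REST API; the project, article ID, and field values below are hypothetical:

```shell
# Create a comment document in the "comments" subcollection of an article.
curl -s -X POST \
  "https://firestore.googleapis.com/v1/projects/my-project/databases/(default)/documents/articles/article123/comments" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"fields": {
        "text": {"stringValue": "Great article!"},
        "createdAt": {"timestampValue": "2024-01-01T00:00:00Z"}
      }}'
```

Each article's comments live under its own subcollection, so neither the article document nor any index of comment keys grows without bound.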