[GCP] Google Cloud Certified - Professional Cloud Developer

Professional Cloud Developer is a Google Cloud certification. It demonstrates that you have the skills to build and deploy scalable, highly available applications on Google Cloud Platform, using Google’s recommended practices and tools and leveraging fully managed services.


Google Cloud Certified – Professional Cloud Developer Practice Exam (40 Q)

v2020-06-03


QUESTION 1

You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find that your notification system is too slow for time-critical problems.
What should you do?

  • A. Replace your entire monitoring platform with Stackdriver.
  • B. Install the Stackdriver agents on your Google Compute Engine instances.
  • C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
  • D. Migrate some traffic back to your old platform and perform A/B testing on the two platforms concurrently.

Correct Answer: B

Reference:
Cloud Monitoring


QUESTION 2

You are planning to migrate a MySQL database to Google Cloud SQL, the managed database service on Google Cloud. You have Google Compute Engine virtual machine instances that will connect to this Google Cloud SQL instance. You do not want to whitelist IPs for the Google Compute Engine instances to be able to access Google Cloud SQL.
What should you do?

  • A. Enable private IP for the Google Cloud SQL instance.
  • B. Whitelist a project to access Google Cloud SQL, and add Google Compute Engine instances in the whitelisted project.
  • C. Create a role in Google Cloud SQL that allows access to the database from external instances, and assign the Google Compute Engine instances to that role.
  • D. Create a Google Cloud SQL instance on one project. Create Google Compute engine instances in a different project. Create a VPN between these two projects to allow internal access to Google Cloud SQL.

Correct Answer: C

Reference:
Connecting to Cloud SQL from external applications


QUESTION 3

You need to migrate an internal file upload API with an enforced 500-MB file size limit to Google App Engine.
What should you do?

  • A. Use FTP to upload files.
  • B. Use CPanel to upload files.
  • C. Use signed URLs to upload files.
  • D. Change the API to be a multipart file upload API.

Correct Answer: C

Reference:
wiki: Google Cloud Platform


QUESTION 4

Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small rows to a Google BigQuery table.

BigQuery service = BigQueryOptions.newBuilder().build().getService();

public void writeToBigQuery(Collection<Map<String, String>> rows) {
  for (Map<String, String> row : rows) {
    InsertAllRequest insertRequest = InsertAllRequest.newBuilder(
        "datasetId", "tableId",
        InsertAllRequest.RowToInsert.of(row)).build();
    service.insertAll(insertRequest);
  }
}

Which improvement should you suggest your teammate make?

  • A. Include multiple rows with each request.
  • B. Perform the inserts in parallel by creating multiple threads.
  • C. Write each row to a Google Cloud Storage object, then load into Google BigQuery.
  • D. Write each row to a Google Cloud Storage object in parallel, then load into Google BigQuery.

Correct Answer: B
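
For reference, a minimal sketch of what option A (batching multiple rows into a single insertAll request) could look like, reusing the same service field and the hypothetical "datasetId"/"tableId" from the snippet above:

  // Sketch only: build one InsertAllRequest that carries every row,
  // so a single insertAll call replaces the per-row requests above.
  public void writeToBigQuery(Collection<Map<String, String>> rows) {
    InsertAllRequest.Builder builder = InsertAllRequest.newBuilder("datasetId", "tableId");
    for (Map<String, String> row : rows) {
      builder.addRow(InsertAllRequest.RowToInsert.of(row));
    }
    service.insertAll(builder.build());
  }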


QUESTION 5

You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE).
Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service.
What should you do?

  • A. Define a GKE Service. Clients should use the name of the A record in Google Cloud DNS to find the service’s cluster IP address.
  • B. Define a GKE Service. Clients should use the service name in the URL to connect to the service.
  • C. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.
  • D. Define a GKE Endpoint. Clients should get the endpoint name from Google Cloud DNS.

Correct Answer: C


QUESTION 6

You are using Google Cloud Build to build and test application source code stored in Google Cloud Source Repositories.
The build process requires a build tool not available in the Google Cloud Build environment.
What should you do?

  • A. Download the binary from the internet during the build process.
  • B. Build a custom Google Cloud Builder image and reference the image in your build steps.
  • C. Include the binary in your Google Cloud Source Repositories repository and reference it in your build scripts.
  • D. Ask to have the binary added to the Google Cloud Build environment by filing a feature request against the Google Cloud Build public Issue Tracker.

Correct Answer: B
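
For reference, a custom builder is referenced in a cloudbuild.yaml step like any stock builder image; the image name below is a hypothetical builder you would have built and pushed to your own registry beforehand:

steps:
- name: gcr.io/$PROJECT_ID/my-custom-builder   # hypothetical custom builder image
  args: ['my-build-tool', '--run-checks']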


QUESTION 7

You are deploying your application to a Google Compute Engine virtual machine instance. Your application is configured to write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code.
What should you do?

  • A. Install the Stackdriver Logging Agent and configure it to send the application logs.
  • B. Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.
  • C. Provide the log file folder path in the metadata of the instance to configure it to send the application logs.
  • D. Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.

Correct Answer: A


QUESTION 8

Your service adds text to images that it reads from Google Cloud Storage. During busy times of the year, requests to Google Cloud Storage fail with an HTTP 429 “Too Many Requests” status code.
How should you handle this error?

  • A. Add a cache-control header to the objects.
  • B. Request a quota increase from the GCP Console.
  • C. Retry the request with a truncated exponential backoff strategy.
  • D. Change the storage class of the Google Cloud Storage bucket to Multi-regional.

Correct Answer: C

Reference:
Usage Limits
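
As an illustration, a minimal sketch of truncated exponential backoff with jitter; fetchObject() and TooManyRequestsException are placeholders rather than real Cloud Storage client classes:

public class BackoffExample {
  static class TooManyRequestsException extends RuntimeException {}

  static void fetchObject() { /* placeholder for the Cloud Storage read */ }

  public static void fetchWithBackoff() throws InterruptedException {
    long delayMs = 1000;             // initial retry delay
    final long maxDelayMs = 32_000;  // truncation cap
    final int maxAttempts = 5;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        fetchObject();
        return;
      } catch (TooManyRequestsException e) {
        if (attempt == maxAttempts) throw e;
        long jitterMs = (long) (Math.random() * 1000);  // jitter avoids synchronized retries
        Thread.sleep(Math.min(delayMs, maxDelayMs) + jitterMs);
        delayMs *= 2;                // double the delay up to the cap
      }
    }
  }
}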

QUESTION 9

Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application is released, your CI/CD tool updates the spec.template.spec.containers[0].image value to reference the Docker image of your new application version. When the Deployment object applies the change, you want to deploy at least 1 replica of the new version and maintain the previous replicas until the new replica is healthy.
Which change should you make to the GKE Deployment object shown below?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecommerce-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ecommerce-frontend
  template:
    metadata:
      labels:
        app: ecommerce-frontend
    spec:
      containers:
      - name: ecommerce-frontend-webapp
        image: ecommerce-frontend-webapp:1.7.9
        ports:
        - containerPort: 80

  • A. Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1.
  • B. Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0.
  • C. Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1.
  • D. Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.

Correct Answer: D
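
For reference, the strategy stanza that options A and B describe lives under spec in the Deployment manifest; the values below illustrate maxSurge: 1 with maxUnavailable: 0:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0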


QUESTION 10

You plan to make a simple HTML application available on the internet.
This site keeps information about FAQs for your application. The application is static and contains images, HTML, CSS, and JavaScript. You want to make this application available on the internet with as few steps as possible.
What should you do?

  • A. Upload your application to Google Cloud Storage.
  • B. Upload your application to a Google App Engine environment.
  • C. Create a Google Compute Engine instance with Apache web server installed. Configure Apache web server to host the application.
  • D. Containerize your application first. Deploy this container to Google Kubernetes Engine (GKE) and assign an external IP address to the GKE pod hosting the application.

Correct Answer: A

Reference:
Hosting a static website
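
For context, serving a static site from a bucket takes only a handful of gsutil commands; the bucket and file names here are hypothetical:

gsutil mb gs://my-faq-site
gsutil -m cp -r ./site/* gs://my-faq-site
gsutil iam ch allUsers:objectViewer gs://my-faq-site
gsutil web set -m index.html -e 404.html gs://my-faq-site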


QUESTION 11

Your company has a Google BigQuery data mart that provides analytics information to hundreds of employees.
One user wants to run jobs without interrupting important workloads. This user isn’t concerned about the time it takes to run these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your part.
What should you do?

  • A. Ask the user to run the jobs as batch jobs.
  • B. Create a separate project for the user to run jobs.
  • C. Add the user as a job.user role in the existing project.
  • D. Allow the user to run jobs when important workloads are not running.

Correct Answer: B


QUESTION 12

You want to notify on-call engineers about a service degradation in production while minimizing development time.
What should you do?

  • A. Use Google Cloud Function to monitor resources and raise alerts.
  • B. Use Google Cloud Pub/Sub to monitor resources and raise alerts.
  • C. Use Stackdriver Error Reporting to capture errors and raise alerts.
  • D. Use Stackdriver Monitoring to monitor resources and raise alerts.

Correct Answer: A


QUESTION 13

You are creating a Google Kubernetes Engine (GKE) cluster and run this command:

> gcloud container clusters create large-cluster --num-nodes 200

The command fails with the error:

insufficient regional quota to satisfy request: resource "CPUS": request requires '200.0' and is short '176.0'. project has a quota of '24.0' with '24.0' available

You want to resolve the issue. What should you do?

  • A. Request additional GKE quota in the GCP Console.
  • B. Request additional Google Compute Engine quota in the GCP Console.
  • C. Open a support case to request additional GKE quota.
  • D. Decouple services in the cluster, and rewrite new clusters to function with fewer cores.

Correct Answer: A
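
As a side note, the regional CPU quota referenced in the error can be inspected with a single command (the region name below is only an example); the output lists each quota metric, including CPUS, with its limit and current usage:

gcloud compute regions describe us-central1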


QUESTION 14

Your company has a Google BigQuery dataset named “Master” that keeps information about employee travel and expenses.
This information is organized by employee department. That means employees should only be able to view information for their department. You want to apply a security framework to enforce this requirement with the minimum number of steps.
What should you do?

  • A. Create a separate dataset for each department. Create a view with an appropriate WHERE clause to select records from a particular dataset for the specific department. Authorize this view to access records from your Master dataset. Give employees permission to access this department-specific dataset.
  • B. Create a separate dataset for each department. Create a data pipeline for each department to copy appropriate information from the Master dataset to the specific dataset for the department. Give employees permission to access this department-specific dataset.
  • C. Create a dataset named Master dataset. Create a separate view for each department in the Master dataset. Give employees access to the specific view for their department.
  • D. Create a dataset named Master dataset. Create a separate table for each department in the Master dataset. Give employees access to the specific table for their department.

Correct Answer: B


QUESTION 15

You have an application in production.
It is deployed on Google Compute Engine virtual machine instances controlled by a managed instance group. Traffic is routed to the instances via an HTTP(S) load balancer. Your users are unable to access your application. You want to implement a monitoring technique to alert you when the application is unavailable.
Which technique should you choose?

  • A. Smoke tests.
  • B. Stackdriver uptime checks.
  • C. Google Cloud Load Balancing – health checks.
  • D. Managed instance group – health checks.

Correct Answer: B

Reference:
Stackdriver Monitoring Automation Part 3: Uptime Checks


QUESTION 16

You are developing an HTTP API hosted on a Google Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?

  • A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Clients should use this IP address to connect to the service.
  • B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Then, define an A record in Google Cloud DNS. Clients should use the name of the A record to connect to the service.
  • C. Ensure that clients use Google Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
  • D. Ensure that clients use Google Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.

Correct Answer: D


QUESTION 17

Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
What should you do?

  • A. Add a Stackdriver counter metric for path:/api/alpha/.
  • B. Add a Stackdriver counter metric for endpoint:/api/alpha/*.
  • C. Export the logs to Google Cloud Storage and count lines matching /api/alpha.
  • D. Export the logs to Google Cloud Pub/Sub and count lines matching /api/alpha.

Correct Answer: C


QUESTION 18

You want to re-architect a monolithic application so that it follows a microservices model.
You want to accomplish this efficiently while minimizing the impact of this change to the business.
Which approach should you take?

  • A. Deploy the application to Google Compute Engine and turn on autoscaling.
  • B. Replace the application’s features with appropriate microservices in phases.
  • C. Refactor the monolithic application with appropriate microservices in a single effort and deploy it.
  • D. Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.

Correct Answer: C

Reference:
Migrating a monolithic application to microservices on Google Kubernetes Engine


QUESTION 19

You are using Google Cloud Build to build a Docker image.
You need to modify the build to execute unit and integration tests. When there is a failure, you want the build history to clearly display the stage at which the build failed.
What should you do?

  • A. Add RUN commands in the Dockerfile to execute unit and integration tests.
  • B. Create a Google Cloud Build build config file with a single build step to compile unit and integration tests.
  • C. Create a Google Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration tests.
  • D. Create a Google Cloud Build build config file with separate cloud builder steps to compile and execute unit and integration tests.

Correct Answer: D
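
For illustration, a cloudbuild.yaml with separate steps, assuming a hypothetical Maven-based project; each step is reported individually in the build history, so a failure is attributed to its stage:

steps:
- id: 'build-image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
- id: 'unit-tests'
  name: 'gcr.io/cloud-builders/mvn'
  args: ['test']
- id: 'integration-tests'
  name: 'gcr.io/cloud-builders/mvn'
  args: ['verify', '-Pintegration-tests']
images: ['gcr.io/$PROJECT_ID/my-app']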


QUESTION 20

For this question, refer to the HipLocal case study.
HipLocal’s .NET-based auth service fails under intermittent load.
What should they do?

  • A. Use Google App Engine for autoscaling.
  • B. Use Google Cloud Functions for autoscaling.
  • C. Use a Google Compute Engine cluster for the service.
  • D. Use a dedicated Google Compute Engine virtual machine instance for the service.

Correct Answer: D

Reference:
Autoscaling an Instance Group with Custom Cloud Monitoring Metrics


QUESTION 21

For this question, refer to the HipLocal case study.
HipLocal’s APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot.
What should they do?

  • A. Take frequent snapshots of all of the VMs.
  • B. Install the Stackdriver Logging agent on the VMs.
  • C. Install the Stackdriver Monitoring agent on the VMs.
  • D. Use Stackdriver Trace to look for performance bottlenecks.

Correct Answer: C


QUESTION 22

For this question, refer to the HipLocal case study.
Which service should HipLocal use to enable access to internal apps?

  • A. Google Cloud VPN.
  • B. Google Cloud Armor.
  • C. Virtual Private Cloud.
  • D. Google Cloud Identity-Aware Proxy.

Correct Answer: D

Reference:
Overview of IAP for on-premises apps


QUESTION 23

For this question, refer to the HipLocal case study.
In order to meet their business requirements, how should HipLocal store their application state?

  • A. Use local SSDs to store state.
  • B. Put a memcache layer in front of MySQL.
  • C. Move the state storage to Google Cloud Spanner.
  • D. Replace the MySQL instance with Google Cloud SQL.

Correct Answer: B


QUESTION 24

For this question, refer to the HipLocal case study.
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.
Which configuration should they choose?

  • A. Use the current single instance MySQL on Google Compute Engine and several read-only MySQL servers on Google Compute Engine.
  • B. Use the current single instance MySQL on Google Compute Engine, and replicate the data to Google Cloud SQL in an external master configuration.
  • C. Replace the current single instance MySQL instance with Google Cloud SQL, and configure high availability.
  • D. Replace the current single instance MySQL instance with Google Cloud SQL, and Google provides redundancy without further configuration.

Correct Answer: B


QUESTION 25

Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent to stdout for all of the replicas in your Deployment in all clusters.
Which command should you use?

  • A. kubectl logs [PARAM]
  • B. gcloud logging read [PARAM]
  • C. kubectl exec -it [PARAM] journalctl
  • D. gcloud compute ssh [PARAM] --command="sudo journalctl"

Correct Answer: D


QUESTION 26

You are using Google Cloud Build to create a new Docker image on each source code commit to a Google Cloud Source Repositories repository. Your application is built on every commit to the master branch. You want to release specific commits made to the master branch in an automated method.
What should you do?

  • A. Manually trigger the build for new releases.
  • B. Create a build trigger on a Git tag pattern. Use a Git tag convention for new releases.
  • C. Create a build trigger on a Git branch name pattern. Use a Git branch naming convention for new releases.
  • D. Commit your source code to a second Google Cloud Source Repositories repository with a second Google Cloud Build trigger. Use this repository for new releases only.

Correct Answer: C

Reference:
Set up automated builds
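
For reference, a trigger keyed to a naming convention can be created from the command line; the repository name and pattern below are examples, and a tag-based trigger (option B) is created the same way with --tag-pattern:

gcloud builds triggers create cloud-source-repositories \
  --repo=my-app-repo \
  --branch-pattern="^release-.*$" \
  --build-config=cloudbuild.yaml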


QUESTION 27

You are designing a schema for a table that will be moved from MySQL to Google Cloud Bigtable. The MySQL table is as follows:

AccountActivity
(
Account_id int,
Event_timestamp datetime,
Transaction_type string,
Amount numeric (18,4)
) primary key (Account_id, Event_timestamp)

How should you design a row key for Google Cloud Bigtable for this table?

  • A. Set Account_id as a key.
  • B. Set Account_id_Event_timestamp as a key.
  • C. Set Event_timestamp_Account_id as a key.
  • D. Set Event_timestamp as a key.

Correct Answer: C


QUESTION 28

You are working on a social media application. You plan to add a feature that allows users to upload images. These images will be 2 MB – 1 GB in size. You want to minimize the infrastructure operations overhead for this feature.
What should you do?

  • A. Change the application to accept images directly and store them in the database that stores other user information.
  • B. Change the application to create signed URLs for Google Cloud Storage. Transfer these signed URLs to the client application to upload images to Google Cloud Storage.
  • C. Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the application to retrieve images from the file store.
  • D. Create a separate bucket for each user in Google Cloud Storage. Assign a separate service account to allow write access on each bucket. Transfer service account credentials to the client application based on user information. The application uses this service account to upload images to Google Cloud Storage.

Correct Answer: B

Reference:
Uploading images directly to Cloud Storage using Signed URL
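
As a sketch of option B using the Cloud Storage Java client, a V4 signed URL scoped to an HTTP PUT lets the client upload directly to the bucket; the bucket and object names are hypothetical:

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class UploadUrlGenerator {
  public static URL uploadUrl(String userId, String imageName) {
    Storage storage = StorageOptions.getDefaultInstance().getService();
    BlobInfo blob = BlobInfo.newBuilder(
        BlobId.of("user-images-bucket", userId + "/" + imageName)).build();
    // The URL is valid for 15 minutes and only for a PUT of this object.
    return storage.signUrl(blob, 15, TimeUnit.MINUTES,
        Storage.SignUrlOption.httpMethod(HttpMethod.PUT),
        Storage.SignUrlOption.withV4Signature());
  }
}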


QUESTION 29

Your application performs well when tested locally, but it runs significantly slower when you deploy it to Google App Engine standard environment. You want to diagnose the problem.
What should you do?

  • A. File a ticket with Google Cloud Support indicating that the application performs faster locally.
  • B. Use Stackdriver Debugger Snapshots to look at a point-in-time execution of the application.
  • C. Use Stackdriver Trace to determine which functions within the application have higher latency.
  • D. Add logging commands to the application and use Stackdriver Logging to check where the latency problem occurs.

Correct Answer: D


QUESTION 30

Your Google App Engine standard configuration is as follows:

service: production
instance_class: B1

You want to limit the application to 5 instances.
Which code snippet should you include in your configuration?

  • A. manual_scaling:
    instances: 5
    min_pending_latency: 30ms
  • B. manual_scaling:
    max_instances: 5
    idle_timeout: 10m
  • C. basic_scaling:
    instances: 5
    min_pending_latency: 30ms
  • D. basic_scaling:
    max_instances: 5
    idle_timeout: 10m

Correct Answer: C


QUESTION 31

Your application is running on Google Compute Engine and is showing sustained failures for a small number of requests. You have narrowed the cause down to a single Google Compute Engine instance, but the instance is unresponsive to SSH.
What should you do next?

  • A. Reboot the machine.
  • B. Enable and check the serial port output.
  • C. Delete the machine and create a new one.
  • D. Take a snapshot of the disk and attach it to a new machine.

Correct Answer: A


QUESTION 32

You configured your Google Compute Engine instance group to scale automatically according to overall CPU usage.
However, your application’s response latency increases sharply before the instance group has finished adding instances. You want to provide a more consistent latency experience for your end users by changing the configuration of the instance group autoscaler.
Which two configuration changes should you make? (Choose two.)

  • A. Add the label “AUTOSCALE” to the instance group template.
  • B. Decrease the cool-down period for instances added to the group.
  • C. Increase the target CPU usage for the instance group autoscaler.
  • D. Decrease the target CPU usage for the instance group autoscaler.
  • E. Remove the health-check for individual VMs in the instance group.

Correct Answer: A, C
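
For reference, the cool-down period and CPU target mentioned in options B, C, and D are set on the managed instance group’s autoscaler; the group name and values below are only examples:

gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --max-num-replicas=20 \
  --cool-down-period=60 \
  --target-cpu-utilization=0.6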


QUESTION 33

You have an application controlled by a managed instance group. When you deploy a new version of the application, costs should be minimized and the number of instances should not increase. You want to ensure that, when each new instance is created, the deployment only continues if the new instance is healthy.
What should you do?

  • A. Perform a rolling-action with maxSurge set to 1, maxUnavailable set to 0.
  • B. Perform a rolling-action with maxSurge set to 0, maxUnavailable set to 1
  • C. Perform a rolling-action with maxHealthy set to 1, maxUnhealthy set to 0.
  • D. Perform a rolling-action with maxHealthy set to 0, maxUnhealthy set to 1.

Correct Answer: A

Reference:
Rolling out updates to MIGs


QUESTION 34

Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Google Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as securely as possible.
What should you do?

  • A. Use HTTP signed URLs to securely provide access to the required resources.
  • B. Use the instance’s service account Application Default Credentials to authenticate to the required resources.
  • C. Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host instance before starting the application.
  • D. Commit the credential JSON file into your application’s source repository, and have your CI/CD process package it with the software that is deployed to the instance.

Correct Answer: B

Reference:
Authorizing requests to Compute Engine
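
As a brief sketch of option B: with Application Default Credentials, a client library instantiated on the VM authenticates as the instance’s attached service account, so no key file is ever copied to the host (the Storage client below is just an example):

import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class AdcExample {
  public static Storage storageClient() {
    // No explicit credentials: ADC resolves to the VM's service account
    // via the metadata server.
    return StorageOptions.getDefaultInstance().getService();
  }
}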


QUESTION 35

Your company is planning to migrate its on-premises Hadoop environment to the cloud.
Increasing storage costs and the maintenance burden of data stored in HDFS are major concerns for your company. You also want to make minimal changes to existing data analytics jobs and the existing architecture.
How should you proceed with the migration?

  • A. Migrate your data stored in Hadoop to Google BigQuery. Change your jobs to source their information from Google BigQuery instead of the on-premises Hadoop environment.
  • B. Create Google Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Google Compute Engine instances.
  • C. Create a Google Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Google Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.
  • D. Create a Google Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Google Cloud Storage and leverage the Google Cloud Dataproc connector to run jobs on that data.

Correct Answer: D
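
As an illustration of option D, existing Hadoop or Spark jobs typically only need their input and output paths switched from hdfs:// to gs:// once the data lives in Cloud Storage; the cluster, jar, and bucket names below are hypothetical:

gcloud dataproc jobs submit spark \
  --cluster=analytics-cluster \
  --region=us-central1 \
  --class=com.example.DailyAggregation \
  --jars=gs://my-jobs-bucket/daily-aggregation.jar \
  -- gs://my-data-bucket/events/ gs://my-data-bucket/output/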


QUESTION 36

Your data is stored in Google Cloud Storage buckets. Fellow developers have reported that data downloaded from Google Cloud Storage is resulting in slow API performance. You want to research the issue to provide details to the GCP support team.
Which command should you run?

  • A. gsutil test -o output.json gs://my-bucket
  • B. gsutil perfdiag -o output.json gs://my-bucket
  • C. gcloud compute scp example-instance:~/test-data -o output.json gs://my-bucket
  • D. gcloud services test -o output.json gs://my-bucket

Correct Answer: B

Reference:
Sometimes get super-slow download rates from Google Cloud Storage, severely impacting workflow


QUESTION 37

You are using Google Cloud Build to promote a Docker image to Development, Test, and Production environments. You need to ensure that the same Docker image is deployed to each of these environments.
How should you identify the Docker image in your build?

  • A. Use the latest Docker image tag.
  • B. Use a unique Docker image name.
  • C. Use the digest of the Docker image.
  • D. Use a semantic version Docker image tag.

Correct Answer: D


QUESTION 38

Your company has created an application that uploads a report to a Google Cloud Storage bucket. When the report is uploaded to the bucket, you want to publish a message to a Google Cloud Pub/Sub topic. You want to implement a solution that takes a small amount of effort to implement.
What should you do?

  • A. Configure the Google Cloud Storage bucket to trigger Google Cloud Pub/Sub notifications when objects are modified.
  • B. Create an Google App Engine application to receive the file; when it is received, publish a message to the Google Cloud Pub/Sub topic.
  • C. Create a Google Cloud Function that is triggered by the Google Cloud Storage bucket. In the Google Cloud Function, publish a message to the Google Cloud Pub/Sub topic.
  • D. Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received, publish a message to the Google Cloud Pub/Sub topic.

Correct Answer: C

Reference:
Pub/Sub notifications for Cloud Storage


QUESTION 39

Your teammate has asked you to review the code below, which is adding a credit to an account balance in Google Cloud Datastore.
Which improvement should you suggest your teammate make?

  public Entity creditAccount(long accountId, long creditAmount) {
    Entity account = datastore.get(keyFactory.newKey(accountId));
    account = Entity.builder(account).set(
        "balance", account.getLong("balance") + creditAmount).build();
    datastore.put(account);
    return account;
  }

  • A. Get the entity with an ancestor query.
  • B. Get and put the entity in a transaction.
  • C. Use a strongly consistent transactional database.
  • D. Don’t return the account entity from the function.

Correct Answer: A
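
For comparison, a sketch of the get-and-put-inside-a-transaction pattern named in option B, using the Datastore client’s newTransaction API (newer Entity.newBuilder style, same hypothetical datastore and keyFactory fields as above):

  public Entity creditAccount(long accountId, long creditAmount) {
    Transaction txn = datastore.newTransaction();
    try {
      Entity account = txn.get(keyFactory.newKey(accountId));
      Entity updated = Entity.newBuilder(account)
          .set("balance", account.getLong("balance") + creditAmount)
          .build();
      txn.put(updated);
      txn.commit();  // read and update succeed or fail together
      return updated;
    } finally {
      if (txn.isActive()) {
        txn.rollback();  // undo if the commit never happened
      }
    }
  }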


QUESTION 40

Your API backend is running on multiple cloud providers.
You want to generate reports for the network latency of your API.
Which two steps should you take? (Choose two.)

  • A. Use Zipkin collector to gather data.
  • B. Use Fluentd agent to gather data.
  • C. Use Stackdriver Trace to generate reports.
  • D. Use Stackdriver Debugger to generate reports.
  • E. Use Stackdriver Profiler to generate reports.

Correct Answer: C, E
