[GCP] Google Cloud Certified: Professional Cloud Architect

Ace Your Professional Cloud Architect Certification with Practice Exams.

Google Cloud Certified – Professional Cloud Architect – Practice Exam (Question 53)


Question 01

For this question, refer to the JencoMart case study.
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources.
What Google domain and project structure should you recommend?

  • A. Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.
  • B. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.
  • C. Create a single G Suite account to manage users with each stage of each application in its own project.
  • D. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Correct Answer: D

Note: The principle of least privilege and separation of duties are concepts that, although semantically different, are intrinsically related from the standpoint of security. The intent behind both is to prevent people from having higher privilege levels than they actually need.
– Principle of Least Privilege: Users should only have the least amount of privileges required to perform their job and no more. This reduces authorization exploitation by limiting access to resources such as targets, jobs, or monitoring templates for which they are not authorized.
– Separation of Duties: Beyond limiting user privilege level, you also limit user duties, or the specific jobs they can perform. No user should be given responsibility for more than one related function. This limits the ability of a user to perform a malicious action and then cover up that action.
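
As a sketch of answer D under these principles, separate projects with distinct admin groups might look like the following gcloud commands (the project IDs and group addresses are hypothetical, for illustration only):

```shell
# One project per environment, managed under a single G Suite / Cloud Identity account.
gcloud projects create jencomart-dev --name="JencoMart Dev/Test/Staging"
gcloud projects create jencomart-prod --name="JencoMart Production"

# Separation of duties: a different admin group per environment,
# each granted only the roles it needs (least privilege).
gcloud projects add-iam-policy-binding jencomart-dev \
    --member="group:dev-admins@jencomart.com" --role="roles/editor"
gcloud projects add-iam-policy-binding jencomart-prod \
    --member="group:prod-admins@jencomart.com" --role="roles/editor"
```

Because IAM policies are attached per project, no member of dev-admins gains any access to the production project, and vice versa.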

Reference contents:
Separation of duties | Cloud KMS Documentation


Question 02

For this question, refer to the JencoMart case study.
A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections.
It is still serving database requests to the application servers correctly.
What three steps should you take to diagnose the problem? (Choose 3 answers)

  • A. Delete the virtual machine (VM) and disks and create a new one.
  • B. Delete the instance, attach the disk to a new VM, and investigate.
  • C. Take a snapshot of the disk and connect to a new machine to investigate.
  • D. Check inbound firewall rules for the network the machine is connected to.
  • E. Connect the machine to another network with very simple firewall rules and investigate.
  • F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Correct Answer: C, D, F

D: Handling “Unable to connect on port 22” error message
Possible causes include:
– There is no firewall rule allowing SSH access on the port. SSH access on port 22 is enabled on all Google Compute Engine instances by default. If you have disabled access, SSH from the Browser will not work. If you run SSH on a port other than 22, you need to enable access to that port with a custom firewall rule.
– The firewall rule allowing SSH access is enabled, but is not configured to allow connections from Google Cloud Console services. Source IP addresses for browser-based SSH sessions are dynamically allocated by Google Cloud Console and can vary from session to session.
F: Handling “Could not connect, retrying…” error
You can verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do not see these output prefixes in the serial console output, the daemon might be stopped. Reboot the instance to restart the daemon.
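
The serial-console steps in option F can be sketched with gcloud (the instance name and zone below are placeholders):

```shell
# Print the serial console output for the unresponsive instance.
gcloud compute instances get-serial-port-output db-server --zone us-central1-a

# Enable the interactive serial console via instance metadata, then connect.
gcloud compute instances add-metadata db-server --zone us-central1-a \
    --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port db-server --zone us-central1-a
```

This lets you inspect boot and daemon logs, and log in over the serial port even when SSH over the network is blocked.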

Reference contents:
SSH from the browser | Compute Engine Documentation


Question 03

For this question, refer to the JencoMart case study.
JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE).
During the migration, the existing infrastructure will need access to Google Cloud Datastore to upload the data.
What service account key-management strategy should you recommend?

  • A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).
  • B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.
  • C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.
  • D. Deploy a custom authentication service on GCE/Google Kubernetes Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Correct Answer: C

Migrating data to Google Cloud Platform
Let’s say that you have some data processing that happens on another cloud provider and you want to transfer the processed data to Google Cloud Platform. You can use a service account from the virtual machines on the external cloud to push the data to Google Cloud Platform. To do this, you must create and download a service account key when you create the service account and then use that key from the external process to call the Google Cloud Platform APIs.
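
A minimal sketch of answer C, with hypothetical project and account names: create a downloadable (user-managed) key only for the on-premises side, while the GCE VMs rely on an attached service account whose keys Google manages and rotates:

```shell
# On-premises side: a service account with a user-managed, downloadable key.
gcloud iam service-accounts create onprem-migration \
    --display-name="On-prem Datastore upload"
gcloud projects add-iam-policy-binding jencomart-prod \
    --member="serviceAccount:onprem-migration@jencomart-prod.iam.gserviceaccount.com" \
    --role="roles/datastore.user"
gcloud iam service-accounts keys create onprem-key.json \
    --iam-account="onprem-migration@jencomart-prod.iam.gserviceaccount.com"

# GCE side: no key file needed. Attach a service account at instance creation;
# its keys are GCP-managed.
gcloud compute instances create app-server-1 --zone us-central1-a \
    --service-account="app-sa@jencomart-prod.iam.gserviceaccount.com" \
    --scopes="https://www.googleapis.com/auth/datastore"
```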

Reference contents:
Understanding service accounts | Cloud IAM Documentation


Question 04

For this question, refer to the JencoMart case study.
JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia.
You want to measure success against their business and technical goals.
Which metrics should you track?

  • A. Error rates for requests from Asia.
  • B. Latency difference between US and Asia.
  • C. Total visits, error rates, and latency from Asia.
  • D. Total visits and average latency for users from Asia.
  • E. The number of character sets present in the database.

Correct Answer: D

From scenario:
– Business Requirements include: Expand services into Asia
– Technical Requirements include: Decrease latency in Asia


Question 05

For this question, refer to the JencoMart case study.
The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly.
The infrastructure is shown in the diagram. You want to maximize throughput.
What are three potential bottlenecks? (Choose 3 answers)

(Architecture diagram: JencoMart migration infrastructure)
  • A. A single VPN tunnel, which limits throughput.
  • B. A tier of Google Cloud Storage that is not suited for this task.
  • C. A copy command that is not suited to operate over long distances.
  • D. Fewer virtual machines (VMs) in GCP than on-premises machines.
  • E. A separate storage layer outside the VMs, which is not suited for this task.
  • F. Complicated internet connectivity between the on-premises infrastructure and GCP.

Correct Answer: A, C, E


Question 06

For this question, refer to the JencoMart case study.
JencoMart wants to move their User Profiles database to Google Cloud Platform.
Which Google Database should they use?

  • A. Google Cloud Spanner
  • B. Google BigQuery
  • C. Google Cloud SQL
  • D. Google Cloud Datastore

Correct Answer: D

Common workloads for Google Cloud Datastore:
– User profiles
– Product catalogs
– Game state

Reference contents:
Cloud Storage Options
Datastore Overview | Cloud Datastore Documentation


Question 07

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy.
How should the test coverage differ from their existing backends on the other platforms?

  • A. Tests should scale well beyond the prior approaches.
  • B. Unit tests are no longer required, only end-to-end tests.
  • C. Tests should be applied after the release is in the production environment.
  • D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Correct Answer: A

From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity.


Question 08

For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP).
You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way.
How should you design the process?

  • A. Create a scalable environment in GCP for simulating production load.
  • B. Use the existing infrastructure to test the GCP-based backend at scale.
  • C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
  • D. Create a set of static environments in GCP to test different levels of load – for example, high, medium, and low.

Correct Answer: A

From scenario: Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro


Question 09

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a continuous delivery pipeline.
Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
– Services are deployed redundantly across multiple regions in the US and Europe
– Only frontend services are exposed on the public internet
– They can provide a single frontend IP for their fleet of services
– Deployment artifacts are immutable
Which set of products should they use?

  • A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
  • B. Google Cloud Storage, Google App Engine, Google Network Load Balancer
  • C. Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer
  • D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Correct Answer: C

Container images stored in a registry are immutable deployment artifacts that can be updated and rolled back quickly, Google Container Engine deploys the many small services redundantly across US and European regions, and the global HTTP(S) Load Balancer exposes only the frontend services behind a single frontend IP.


Question 10

For this question, refer to the Mountkirk Games case study.
Mountkirk Games’ gaming servers are not automatically scaling properly.
Last month, they rolled out a new feature, which suddenly became very popular.
A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times.
What should they investigate first?

  • A. Verify that the database is online.
  • B. Verify that the project quota hasn’t been exceeded.
  • C. Verify that the new feature code did not introduce any performance bugs.
  • D. Verify that the load-testing team is not running their tool against production.

Correct Answer: B

A 503 is a Service Unavailable error, which here indicates backends that cannot keep up with demand. If the database were offline, requests would fail consistently, not only under record load. Since autoscaling is failing during a traffic spike, the first thing to verify is whether a project quota (for example, the Compute Engine CPU quota that caps how many instances can be added) has been exceeded.

Reference contents:
Troubleshooting response errors


Question 11

For this question, refer to the Mountkirk Games case study.
Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments.
Developers and testers can access each other’s environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.
What should you do to isolate development environments from staging and production?

  • A. Create a project for development and test and another for staging and production.
  • B. Create a network for development and test and another for staging and production.
  • C. Create one subnetwork for development and another for staging and production.
  • D. Create one project for development, a second for staging and a third for production.

Correct Answer: A

Reference contents:
Google App Engine Go 1.12+ Standard Environment documentation
Best practices for enterprise organizations | Documentation


Question 12

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a real-time analytics platform for their new game.
The new platform must meet their technical requirements.
Which combination of Google technologies will meet all of their requirements?

  • A. Google Kubernetes Engine, Google Cloud Pub/Sub, and Google Cloud SQL
  • B. Google Cloud Dataflow, Google Cloud Storage, Google Cloud Pub/Sub, and Google BigQuery
  • C. Google Cloud SQL, Google Cloud Storage, Google Cloud Pub/Sub, and Google Cloud Dataflow
  • D. Google Cloud Dataproc, Google Cloud Pub/Sub, Google Cloud SQL, and Google Cloud Dataflow
  • E. Google Cloud Pub/Sub, Google Compute Engine, Google Cloud Storage, and Google Cloud Dataproc

Correct Answer: B

Ingest millions of streaming events per second from anywhere in the world with Google Cloud Pub/Sub, powered by Google’s unique, high-speed private network. Process the streams with Google Cloud Dataflow to ensure reliable, exactly-once, low-latency data transformation. Stream the transformed data into Google BigQuery, the cloud-native data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game’s backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics.
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users’ mobile devices
6. Use only fully managed services
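
The late-data requirement (point 3) is typically handled with event-time windows plus an allowed-lateness grace period, which is how Google Cloud Dataflow treats slow mobile uploads. A toy, library-free Python sketch of that idea:

```python
from collections import defaultdict

def window_start(event_time, size):
    """Fixed (tumbling) window: floor the event timestamp to the window size."""
    return event_time - (event_time % size)

def assign_windows(events, size, watermark, allowed_lateness):
    """Group (event_time, value) pairs into fixed windows.

    Events behind the watermark are still accepted if they fall within the
    allowed-lateness horizon; anything older is dropped. This mirrors, in
    miniature, event-time windowing in a streaming engine such as Dataflow.
    """
    windows = defaultdict(list)
    dropped = []
    for event_time, value in events:
        if event_time < watermark - allowed_lateness:
            dropped.append((event_time, value))  # too late even for the grace period
        else:
            windows[window_start(event_time, size)].append(value)
    return dict(windows), dropped
```

For example, with 60-second windows, a watermark at t=180, and 120 seconds of allowed lateness, an event stamped t=70 is late but still lands in the [60, 120) window, while an event stamped t=50 is discarded.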

Reference contents:
Stream analytics solutions


Question 13

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.
Which two steps should be part of their migration plan? (Choose 2 answers)

  • A. Evaluate the impact of migrating their current batch ETL code to Google Cloud Dataflow.
  • B. Write a schema migration plan to denormalize data for better performance in Google BigQuery.
  • C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
  • D. Load 10 TB of analytics data from a previous game into a Google Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.
  • E. Integrate Google Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Google Cloud Storage.

Correct Answer: A, B


Question 14

For this question, refer to the Mountkirk Games case study.
You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games.
Considering the Mountkirk Games business and technical requirements, what should you do?

  • A. Create network load balancers. Use preemptible Google Compute Engine instances.
  • B. Create network load balancers. Use non-preemptible Google Compute Engine instances.
  • C. Create a global load balancer with managed instance groups and auto scaling policies. Use preemptible Google Compute Engine instances.
  • D. Create a global load balancer with managed instance groups and auto scaling policies. Use non-preemptible Google Compute Engine instances.

Correct Answer: C


Question 15

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available.
Which two steps should they take? (Choose 2 answers)

  • A. Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.
  • B. Begin packaging their game backend artifacts in container images and running them on Google Kubernetes Engine to improve the ability to scale up or down based on game activity.
  • C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
  • D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
  • E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

Correct Answer: C, E


Question 16

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency.
What should you do?

  • A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.
  • B. Build a test client that can be run from a mobile phone emulator on a Google Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.
  • C. Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.
  • D. Create an opt-in beta of the game that runs on players’ mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

Correct Answer: C


Question 17

For this question, refer to the Mountkirk Games case study.
You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games.
Considering the business and technical requirements, what should you do?

  • A. Use Google Cloud SQL for time series data, and use Google Cloud Bigtable for historical data queries.
  • B. Use Google Cloud SQL to replace MySQL, and use Google Cloud Spanner for historical data queries.
  • C. Use Google Cloud Bigtable to replace MySQL, and use Google BigQuery for historical data queries.
  • D. Use Google Cloud Bigtable for time series data, use Google Cloud Spanner for transactional data, and use Google BigQuery for historical data queries.

Correct Answer: D


Question 18

For this question, refer to the Mountkirk Games case study.
Which managed storage option meets Mountkirk Games’s technical requirement for storing game activity in a time series database service?

  • A. Google Cloud Bigtable
  • B. Google Cloud Spanner
  • C. Google BigQuery
  • D. Google Cloud Datastore

Correct Answer: A


Question 19

For this question, refer to the Mountkirk Games case study.
You are in charge of the new Game Backend Platform architecture.
The game communicates with the backend over a REST API.
You want to follow Google-recommended practices. How should you design the backend?

  • A. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.
  • B. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.
  • C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
  • D. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Correct Answer: A

Reference contents:
Choosing a load balancer | Load Balancing
Can I use TCP in a RESTful service?
Load Balancing Layer 4 and Layer 7


Question 20

For this question, refer to the TerramEarth case study.
TerramEarth’s CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure.
You want to allow analysts to centrally query the vehicle data.
Which architecture should you recommend?

  • A. (Architecture diagram for option A)
  • B. (Architecture diagram for option B)
  • C. (Architecture diagram for option C)
  • D. (Architecture diagram for option D)

Correct Answer: A

The push endpoint can be a load balancer, and a container cluster can be used.
Google Cloud Pub/Sub for Stream Analytics

(Architecture diagram: IoT architecture)

Reference contents:
Cloud Pub/Sub | Google
Google Cloud IoT – Fully Managed IoT Services
Designing a Connected Vehicle Platform on Cloud IoT Core | Solutions
Google Says Cloud IoT Core Useful for Connected Vehicle Data Analysis


Question 21

For this question, refer to the TerramEarth case study.
The TerramEarth development team wants to create an API to meet the company’s business requirements.
You want the development team to focus their development effort on business value versus creating a custom framework.
Which method should they use?

  • A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.
  • B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.
  • C. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.
  • D. Use Google Container Engine with a Django Python container. Focus on an API for the public.
  • E. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.

Correct Answer: A

Develop, deploy, protect and monitor your APIs with Google Cloud Endpoints. Using an Open API Specification or one of our API frameworks, Google Cloud Endpoints gives you the tools you need for every phase of API development.


Question 22

For this question, refer to the TerramEarth case study.
Your development team has created a structured API to retrieve vehicle data.
They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data.
What should you do?

  • A. Build or leverage an OAuth 2.0 -compatible access control system.
  • B. Build SAML 2.0 SSO compatibility into your authentication system.
  • C. Restrict data access based on the source IP address of the partner systems.
  • D. Create secondary credentials for each dealer that can be given to the trusted third party.

Correct Answer: A

Delegate application authorization with OAuth 2.0.
Google Cloud APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are supported. Google Cloud Platform supports both service-account and user-account OAuth, also called three-legged OAuth.
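
The scope-based authorization described above can be sketched as a simple check in an API handler. The scope names here are hypothetical, invented for this vehicle-data API, not real Google scopes:

```python
def has_required_scopes(granted, required):
    """True if the access token's granted scopes cover every scope the method requires."""
    return set(required).issubset(set(granted))

def authorize(token_scopes, method_scopes):
    """Gate an API method on the scopes the dealer delegated to the third party."""
    if not has_required_scopes(token_scopes, method_scopes):
        raise PermissionError("insufficient_scope")
    return "authorized"
```

A token carrying only a read scope can call read methods but is rejected from write methods, which is exactly the delegated, granular authorization OAuth 2.0 provides.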

Reference contents:
Using OAuth 2.0 to Access Google APIs | Google Identity
Authenticating as an end user | Authentication
Creating short-lived service account credentials


Question 23

For this question, refer to the TerramEarth case study.
TerramEarth plans to connect all 20 million vehicles in the field to the cloud.
This increases the volume to 20 million 600-byte records per second, roughly 40 TB an hour.
How should you design the data ingestion?

  • A. Vehicles write data directly to Google Cloud Storage.
  • B. Vehicles write data directly to Google Cloud Pub/Sub.
  • C. Vehicles stream data directly to Google BigQuery.
  • D. Vehicles continue to write data using the existing system (FTP).

Correct Answer: C

Streamed data is available for real-time analysis within a few seconds of the first streaming insertion into a table.
Instead of using a job to load data into Google BigQuery, you can choose to stream your data into Google BigQuery one record at a time by using the tabledata().insertAll() method. This approach enables querying data without the delay of running a load job.
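
Because streaming inserts send a batch of records per request, a client typically chunks rows before calling tabledata().insertAll(). A library-free sketch of that batching; the limits shown are illustrative defaults, not BigQuery's exact quotas:

```python
import json

def batch_rows(rows, max_rows=500, max_bytes=1_000_000):
    """Split records into batches respecting per-request row-count and
    payload-size limits, as a client would before a streaming insert call."""
    batches, current, current_bytes = [], [], 0
    for row in rows:
        size = len(json.dumps(row).encode("utf-8"))
        # Start a new batch when adding this row would exceed either limit.
        if current and (len(current) >= max_rows or current_bytes + size > max_bytes):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(row)
        current_bytes += size
    if current:
        batches.append(current)
    return batches
```

Each returned batch would then become one insertAll request body, keeping individual requests small enough to retry cheaply when a cellular connection drops.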

Reference contents:
Streaming data into BigQuery


Question 24

For this question, refer to the TerramEarth case study.
You analyzed TerramEarth’s business requirement to reduce downtime, and found that the majority of the time savings comes from reducing customers’ wait time for parts.
You decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company’s processes should you recommend?

  • A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
  • B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
  • C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
  • D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

Correct Answer: C

The Avro binary format is the preferred format for loading compressed data. Avro data is faster to load because the data can be read in parallel, even when the data blocks are compressed.
Google Cloud Storage supports streaming transfers with the gsutil tool or boto library, based on HTTP chunked transfer encoding. Streaming data lets you stream data to and from your Google Cloud Storage account as soon as it becomes available without requiring that the data be first saved to a separate file. Streaming transfers are useful if you have a process that generates data and you do not want to buffer it locally before uploading it, or if you want to send the result from a computational pipeline directly into Google Cloud Storage.
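
A streaming transfer can be sketched with gsutil by piping to or from stdin/stdout; the bucket name and the commands producing/consuming the data are hypothetical:

```shell
# Stream a pipeline's output straight into Cloud Storage
# without buffering it in a local file first.
collect_vehicle_metrics | gsutil cp - gs://terramearth-telemetry/metrics/latest.avro

# Stream an object back out of Cloud Storage into a local process.
gsutil cp gs://terramearth-telemetry/metrics/latest.avro - | analyze_metrics
```

The `-` argument tells gsutil to read from stdin (or write to stdout), which is what makes the transfer a stream rather than a file copy.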

Reference contents:
Streaming transfers | Cloud Storage
Introduction to loading data | BigQuery


Question 25

For this question, refer to the TerramEarth case study.
Which of TerramEarth’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

  • A. Opex/capex allocation, LAN changes, capacity planning.
  • B. Capacity planning, TCO calculations, opex/capex allocation.
  • C. Capacity planning, utilization measurement, data center expansion.
  • D. Data Center expansion, TCO calculations, utilization measurement.

Correct Answer: B

Reference contents:
Google Cloud for Data Center Professionals: Compute


Question 26

For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process.
The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?

  • A. Use one Google Container Engine cluster of FTP servers. Save the data to a Google Cloud Multi-Regional Storage bucket. Run the ETL process using data in the bucket.
  • B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Google Cloud Multi-Regional Storage buckets in US, EU, and Asia. Run the ETL process using the data in the bucket.
  • C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google Cloud APIs over HTTP(S). Run the ETL process using the data in the bucket.
  • D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Correct Answer: D


Question 27

For this question, refer to the TerramEarth case study.
TerramEarth’s 20 million vehicles are scattered around the world.
Based on the vehicle’s location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia).
The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100 K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?

  • A. Move all the data into 1 zone, then launch a Google Cloud Dataproc cluster to run the job.
  • B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job
  • C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Google Cloud Dataproc cluster to finish the job.
  • D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Google Cloud Dataproc cluster to finish the job.

Correct Answer: C

Multi-Regional Storage guarantees at least two replicas that are geographically diverse (at least 100 miles apart), which improves remote latency and availability.
More importantly, Multi-Regional heavily leverages edge caching and CDNs to serve content to end users.
All this redundancy and caching means that Multi-Regional carries overhead to sync and ensure consistency between geo-diverse areas. As such, it is much better suited to write-once-read-many scenarios: frequently accessed (“hot”) objects served around the world, such as website content, streaming video, gaming, or mobile applications.

Reference contents:
Bucket locations | Cloud Storage
Key terms | Cloud Storage
Google Cloud Storage : What bucket class for the best performance?


Question 28

For this question, refer to the TerramEarth case study.
TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data.
Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs.
What should they do?

  • A. Have the vehicle’s computer compress the data in hourly snapshots, and store it in a Google Cloud Storage Nearline bucket.
  • B. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google BigQuery.
  • C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google Cloud Bigtable.
  • D. Have the vehicle’s computer compress the data in hourly snapshots, and store it in a Google Cloud Storage Coldline bucket.

Correct Answer: D

Google Cloud Storage Coldline is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs.
For example:
Cold data storage – Archived data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage, yet still be available if you need it.
Disaster recovery – In the event of a disaster recovery event, recovery time is key. Google Cloud Storage provides low-latency access to data stored as Coldline Storage.

Reference contents:
Storage classes


Question 29

For this question, refer to the TerramEarth case study.
Your agricultural division is experimenting with fully autonomous vehicles.
You want your architecture to promote strong security during vehicle operation.
Which two architectures should you consider? (Choose 2 answers)

  • A. Treat every micro service call between modules on the vehicle as untrusted.
  • B. Require IPv6 for connectivity to ensure a secure address space.
  • C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
  • D. Use a functional programming language to isolate code execution cycles.
  • E. Use multiple connectivity subsystems for redundancy.
  • F. Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.

Correct Answer: A, C


Question 30

For this question, refer to the TerramEarth case study.
Operational parameters such as oil pressure are adjustable on each of TerramEarth’s vehicles to increase their efficiency, depending on their environmental conditions.
Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?

  • A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
  • B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
  • C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
  • D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.

Correct Answer: B

Note: Most of the 20 million vehicles are unconnected, so the trained models must run locally on each vehicle to make operational adjustments; a model hosted only in the cloud cannot reach them.


Question 31

For this question, refer to the TerramEarth case study.
To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data.
In the new architecture, this data will be stored in both Google Cloud Storage and Google BigQuery.
What should you do?

  • A. Create a Google BigQuery table for the European data, and set the table retention period to 36 months. For Google Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
  • B. Create a Google BigQuery table for the European data, and set the table retention period to 36 months. For Google Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
  • C. Create a Google BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Google Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
  • D. Create a Google BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Google Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

Correct Answer: C

Note: Changing the storage class does not delete data, and GDPR requires deletion. A time-partitioned table with a 36-month partition expiration, plus a lifecycle DELETE action in Cloud Storage, meets the requirement.

Reference contents
Managing tables | BigQuery
Object Lifecycle Management | Cloud Storage
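A concrete sketch of the 36-month retention setup: a lifecycle DELETE rule for Cloud Storage plus a partition expiration for BigQuery. The bucket and table names are hypothetical, and the month-to-day conversion is an approximation:

```python
import json

MONTHS = 36
DAYS = MONTHS * 30          # lifecycle Age conditions are in days (30-day months assumed)
SECONDS = DAYS * 24 * 3600  # bq partition expiration is in seconds

# Cloud Storage: delete European objects once they reach the retention age.
# Would be applied with: gsutil lifecycle set lifecycle.json gs://example-eu-bucket
lifecycle = {
    "lifecycle": {
        "rule": [
            {"action": {"type": "Delete"}, "condition": {"age": DAYS}}
        ]
    }
}

# BigQuery: expire partitions in a time-partitioned table, e.g.:
#   bq update --time_partitioning_expiration <SECONDS> dataset.eu_telemetry
print(json.dumps(lifecycle), DAYS, SECONDS)
```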


Question 32

For this question, refer to the TerramEarth case study.
TerramEarth has decided to store data files in Google Cloud Storage.
You need to configure Google Cloud Storage lifecycle rules to store 1 year of data and minimize file storage cost.
Which two actions should you take?

  • A. Create a Google Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.
  • B. Create a Google Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.
  • C. Create a Google Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.
  • D. Create a Google Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.

Correct Answer: A

Note: After 30 days the objects are in Coldline, so the delete rule at 365 days must match the Coldline storage class; option D's second rule, which matches Nearline, would never fire.

Reference contents:
–  Managing object lifecycles | Cloud Storage
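A lifecycle policy that transitions Standard objects to Coldline after 30 days and deletes them after a year can be written as a single policy document (bucket name hypothetical; it would be applied with `gsutil lifecycle set policy.json gs://BUCKET`):

```python
import json

# Sketch of the two lifecycle rules: Standard -> Coldline at day 30,
# then delete at day 365. Coldline's 90-day minimum storage duration
# still applies to objects deleted or rewritten early.
policy = {
    "lifecycle": {
        "rule": [
            {
                "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
                "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]},
            },
            {
                "action": {"type": "Delete"},
                "condition": {"age": 365},
            },
        ]
    }
}
print(json.dumps(policy, indent=2))
```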


Question 33

For this question, refer to the TerramEarth case study.
You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth.
Considering the TerramEarth business and technical requirements, what should you do?

  • A. Replace the existing data warehouse with Google BigQuery. Use table partitioning.
  • B. Replace the existing data warehouse with a Google Compute Engine instance with 96 CPUs.
  • C. Replace the existing data warehouse with Google BigQuery. Use federated data sources.
  • D. Replace the existing data warehouse with a Google Compute Engine instance with 96 CPUs. Add an additional Google Compute Engine preemptible instance with 32 CPUs.

Correct Answer: C


Question 34

For this question, refer to the TerramEarth case study.
A new architecture that writes all incoming data to Google BigQuery has been introduced.
You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.
What should you do?

  • A. Set up a streaming Google Cloud Dataflow job, receiving data from the ingestion process. Clean the data in a Google Cloud Dataflow pipeline.
  • B. Create a Google Cloud Function that reads data from Google BigQuery and cleans it. Trigger the Google Cloud Function from a Google Compute Engine instance.
  • C. Create a SQL statement on the data in Google BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
  • D. Use Google Cloud Dataprep and configure the Google BigQuery tables as the source. Schedule a daily job to clean the data.

Correct Answer: D

Reference contents:
Cleaning data in a data processing pipeline
Running Cloud Dataprep jobs on Cloud Dataflow for more control
Google Cloud Dataprep vs. Google Cloud Dataflow vs. Stitch – Compare features, pricing, services, and more.
ETL Processing on Google Cloud Using Dataflow and BigQuery


Question 35

For this question, refer to the TerramEarth case study.
Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

  • A. Use Google BigQuery as the data warehouse. Connect all vehicles to the network and stream data into Google BigQuery using Google Cloud Pub/Sub and Google Cloud Dataflow. Use Google Data Studio for analysis and reporting.
  • B. Use Google BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Google Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
  • C. Use Google Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Google Cloud Storage bucket. Upload this data into Google BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
  • D. Use Google Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Correct Answer: A


Question 36

For this question, refer to the TerramEarth case study.
You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?

  • A. Google Kubernetes Engine with an SSL Ingress.
  • B. Google Cloud IoT Core with public/private key pairs.
  • C. Google Compute Engine with project-wide SSH keys.
  • D. Google Compute Engine with specific SSH keys.

Correct Answer: B

Note: Google Cloud IoT Core with per-device public/private key authentication is the Google-recommended way to ingest telemetry from a large device fleet; SSH keys and an SSL Ingress are not device-ingestion mechanisms.


Question 37

For this question, refer to the Dress4Win case study.
The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP).
The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects.
What can they do?

  • A. Grant the operations engineer access to use Google Cloud Shell.
  • B. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
  • C. Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.
  • D. Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.

Correct Answer: A

Note: Google Cloud Shell ships with gcloud, gsutil, and Docker preinstalled, so the operations team can manage VMs, build and push containers, and manage Cloud Storage objects without re-enabling external SSH to the VMs.


Question 38

For this question, refer to the Dress4Win case study.
At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files.
The database files are compressed tar files stored in their current data center.
How should he proceed?

  • A. Create a cron script using gsutil to copy the files to a Google Cloud Coldline Storage bucket.
  • B. Create a cron script using gsutil to copy the files to a Google Cloud Regional Storage bucket.
  • C. Create a Google Cloud Storage Transfer Service Job to copy the files to a Google Cloud Coldline Storage bucket.
  • D. Create a Google Cloud Storage Transfer Service job to copy the files to a Google Cloud Regional Storage bucket.

Correct Answer: A

Follow these rules of thumb when deciding whether to use gsutil or Google Cloud Storage Transfer Service:
– When transferring data from an on-premises location, use gsutil.
– When transferring data from another cloud storage provider, use Storage Transfer Service.
– Otherwise, evaluate both tools with respect to your specific scenario.
Use this guidance as a starting point.
The specific details of your transfer scenario will also help you determine which tool is more appropriate.
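The rules of thumb above can be condensed into a small decision helper (a simplification; real transfers also weigh data size, available bandwidth, and scheduling needs):

```python
def choose_transfer_tool(source: str) -> str:
    """Pick a transfer tool based on where the data lives, per the rules of thumb.

    source: "on-premises", "cloud-provider", or anything else.
    """
    if source == "on-premises":
        return "gsutil"
    if source == "cloud-provider":
        return "Storage Transfer Service"
    return "evaluate both"

for src in ("on-premises", "cloud-provider", "tape-archive"):
    print(src, "->", choose_transfer_tool(src))
```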

Reference contents:
Overview | Cloud Storage Transfer Service Documentation
Migration to Google Cloud: Transferring your large datasets


Question 39

For this question, refer to the Dress4Win case study.
Dress4Win has asked you to recommend machine types they should deploy their application servers to.
How should you proceed?

  • A. Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.
  • B. Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.
  • C. Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the desired performance is reached.
  • D. Identify the number of virtual cores and RAM associated with the application server virtual machines, align them to a custom machine type in the cloud, monitor performance, and scale the machine types up until the desired performance is reached.

Correct Answer: D

Note: Dress4Win's application servers already run as virtual machines, so map their virtual cores and RAM to custom machine types, then monitor performance and resize as needed.


Question 40

For this question, refer to the Dress4Win case study.
As part of Dress4Win’s plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load.
They want to ensure that:
– The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day
– Their administrators are notified automatically when their application reports errors.
– They can filter their aggregated logs down in order to debug one piece of the application across many hosts
Which Google StackDriver features should they use?

  • A. Logging, Alerts, Insights, Debug
  • B. Monitoring, Trace, Debug, Logging
  • C. Monitoring, Logging, Alerts, Error Reporting
  • D. Monitoring, Logging, Debug, Error Reporting

Correct Answer: C

Note: Monitoring with alerting covers the scale-up/scale-down notifications, Error Reporting notifies administrators of application errors, and Logging lets them filter aggregated logs across many hosts.


Question 41

For this question, refer to the Dress4Win case study.
Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is.
They have asked for your recommendation.
What should you advise?

  • A. Identify self-contained applications with external dependencies as a first move to the cloud.
  • B. Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.
  • C. Suggest moving their in-house databases to the cloud and continue serving requests to on-premise applications.
  • D. Recommend moving their message queuing servers to the cloud and continue handling requests to on-premise applications.

Correct Answer: A

Note: Self-contained applications are the quickest to move "as is"; splitting databases or message queues from the on-premises applications that use them adds latency and coupling during the migration.

Reference contents:
Migration to Google Cloud: Getting started | Solutions
The five phases of migrating to Google Cloud Platform
Migration Made Easy | Google Cloud Blog


Question 42

For this question, refer to the Dress4Win case study.
Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the cloud.
They want to minimize downtime and performance impact to their on-premises solution during the migration.
Which approach should you recommend?

  • A. Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud environment, and load into a new MySQL cluster.
  • B. Set up a MySQL replica server/slave in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.
  • C. Create a new MySQL cluster in the cloud, configure applications to begin writing to both on premises and cloud MySQL masters, and destroy the original cluster at cutover.
  • D. Create a dump of the MySQL replica server into the cloud environment, load it into: Google Cloud Datastore, and configure applications to read/write to Google Cloud Datastore at cutover.

Correct Answer: B


Question 43

For this question, refer to the Dress4Win case study.
Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services.
The Stackdriver dashboard is not reporting the services as healthy.
What should they do?

  • A. Install the Stackdriver agent on all of the legacy web servers.
  • B. In the Google Cloud Console download the list of the uptime servers’ IP addresses and create an inbound firewall rule.
  • C. Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring).
  • D. Configure their legacy web servers to allow requests that contain user-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring).

Correct Answer: D

Reference contents:
Managing uptime checks | Cloud Monitoring
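As an illustration of the User-Agent approach in options C and D, a server-side allow rule might look like the sketch below. The header prefix is the documented uptime-check identifier; the helper function itself is hypothetical:

```python
# The User-Agent prefix sent by Stackdriver/Cloud Monitoring uptime checkers.
UPTIME_CHECK_PREFIX = "GoogleStackdriverMonitoring-UptimeChecks"

def is_uptime_check(headers: dict) -> bool:
    """Return True when a request appears to come from an uptime checker."""
    return headers.get("User-Agent", "").startswith(UPTIME_CHECK_PREFIX)

# A legacy web server (or load balancer rule) would let these requests
# through even when other unauthenticated traffic is rejected.
print(is_uptime_check({"User-Agent": "GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)"}))
```

Matching only the User-Agent is spoofable, so Google's documentation also publishes the uptime-check source IP ranges for firewall-level allow rules.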


Question 44

For this question, refer to the Dress4Win case study.
As part of their new application experience, Dress4Win allows customers to upload images of themselves.
The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in.
Which configuration should Dress4Win use?

  • A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer’s ID and their image files.
  • B. Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Google Cloud Storage that contains the customer’s unique ID.
  • C. Use a distributed file system to store customers’ images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file’s owner attribute, ensuring privacy of images.
  • D. Use a distributed file system to store customers’ images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer’s ID to their image files.

Correct Answer: A


Question 45

For this question, refer to the Dress4Win case study.
Dress4Win has end-to-end tests covering 100% of their endpoints.
They want to ensure that the move to the cloud does not introduce any new bugs.
Which additional testing methods should the developers employ to prevent an outage?

  • A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
  • B. They should add additional unit tests and production scale load tests on their cloud staging environment.
  • C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.
  • D. They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Correct Answer: B


Question 46

For this question, refer to the Dress4Win case study.
You want to ensure Dress4Win’s sales and tax records remain available for infrequent viewing by auditors for at least 10 years.
Cost optimization is your top priority.
Which cloud services should you choose?

  • A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
  • B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
  • C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
  • D. Google BigQuery to store the data, and a web server cluster in a managed instance group to access the data. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Correct Answer: A


Question 47

For this question, refer to the Dress4Win case study.
The current Dress4Win system architecture has high latency for some customers because it is located in one data center.
To optimize for performance in the cloud, Dress4Win wants to distribute its system architecture across multiple locations on Google Cloud Platform.
Which approach should they use?

  • A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
  • B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
  • C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
  • D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance group.

Correct Answer: A

Note: A global load balancer in front of regional managed instance groups serves users from the nearest region and scales each region independently based on traffic; chains of forwarding VMs add latency and operational overhead.


Question 48

For this question, refer to the Dress4Win case study.
Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage.
The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months.
How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

  • A. Migrate the web application layer to Google App Engine, and MySQL to Google Cloud Datastore, and NAS to Google Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Google Cloud Deployment Manager.
  • B. Migrate RabbitMQ to Google Cloud Pub/Sub, Hadoop to Google BigQuery, and NAS to Google Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Google Cloud Deployment Manager.
  • C. Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Google Cloud SQL, RabbitMQ to Google Cloud Pub/Sub, Hadoop to Google Cloud Dataproc, and NAS to Google Compute Engine with Persistent Disk storage.
  • D. Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Google Cloud SQL, RabbitMQ to Google Cloud Pub/Sub, Hadoop to Google Cloud Dataproc, and NAS to Google Cloud Storage.

Correct Answer: C


Question 49

For this question, refer to the Dress4Win case study.
Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

  • A. Deploy Nginx and Tomcat using Google Cloud Deployment Manager to Google Compute Engine. Deploy a Google Cloud SQL server to replace MySQL. Deploy Jenkins using Google Cloud Deployment Manager.
  • B. Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Google Compute Engine using Google Cloud Deployment Manager scripts.
  • C. Migrate Nginx and Tomcat to Google App Engine. Deploy a Google Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Google Compute Engine using Cloud Launcher.
  • D. Migrate Nginx and Tomcat to Google App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Google Compute Engine using Cloud Launcher.

Correct Answer: A

Note: Google Cloud Datastore is a NoSQL store and cannot replace a relational, transactional MySQL database; Google Cloud Deployment Manager with Google Cloud SQL automates both the web and transactional data layers.


Question 50

For this question, refer to the Dress4Win case study.
Which of the compute services should be migrated as is and would still be an optimized architecture for performance in the cloud?

  • A. Web applications deployed using Google App Engine standard environment.
  • B. RabbitMQ deployed using an unmanaged instance group.
  • C. Hadoop/Spark deployed using Google Cloud Dataproc Regional in High Availability mode.
  • D. Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types.

Correct Answer: A


Question 51

For this question, refer to the Dress4Win case study.
To be legally compliant during an audit, Dress4Win must be able to provide insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.
What should you do?

  • A. Use Stackdriver Trace to create a trace list analysis.
  • B. Use Stackdriver Monitoring to create a dashboard on the project’s activity.
  • C. Enable Google Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.
  • D. Use the Activity page in the Google Cloud Console and Stackdriver Logging to provide the required insight.

Correct Answer: D

Note: Identity-Aware Proxy controls access; it does not record administrative changes. Admin Activity audit logs, surfaced through the Activity page and Stackdriver Logging, provide the required insight.


Question 52

For this question, refer to the Dress4Win case study.
You are responsible for the security of data stored in Google Cloud Storage for your company, Dress4Win.
You have already created a set of Google Groups and assigned the appropriate users to those groups.
You should use Google best practices and implement the simplest design to meet the requirements.
Considering Dress4Win’s business and technical requirements, what should you do?

  • A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Google Cloud Storage.
  • B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encrypted before storing files in Google Cloud Storage.
  • C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google’s default encryption at rest when storing files in Google Cloud Storage.
  • D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Google Cloud Storage.

Correct Answer: C

Note: Google best practice is to prefer predefined IAM roles over custom roles, and Google's default encryption at rest requires no extra configuration, making this the simplest compliant design.


Question 53

For this question, refer to the Dress4Win case study.
You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.
What change in the on-premises architecture should you make?

  • A. Replace RabbitMQ with Google Pub/Sub.
  • B. Downgrade MySQL to v5.7, which is supported by Google Cloud SQL for MySQL.
  • C. Resize compute resources to match predefined Google Compute Engine machine types.
  • D. Containerize the micro services and host them in Google Kubernetes Engine.

Correct Answer: C

