Snowflake SnowPro Core Certification — Questions 1–10 Practice Questions With Solutions. Part 1 of 10.

Shahab Nasir
5 min read · Nov 5, 2023


In this blog post, I will share 10 practice questions for the Snowflake SnowPro Core Certification exam. Practicing with these questions will help you refine your understanding of Snowflake’s core concepts and increase your chances of success on the certification exam. The answer key is at the bottom.

This is Part 1.

Part 2: Questions 11–20

Question 1

Among the options listed, which is not a valid Snowflake Warehouse Size?

A) S

B) XL

C) M

D) XXS

Question 2

Does the Fail-Safe period for temporary and transient tables equal 0?

A) True

B) False

Question 3

Does the time travel feature in Snowflake preserve data at the expense of running continuous backups?

A) True

B) False

Question 4

How many predecessor tasks can a child task have?

A) 1

B) 100

C) 1000

D) 5

Question 5

How frequently does Snowpipe load data into Snowflake?

A) As soon as data files are available in a stage.

B) Once every 1 minute.

C) Once every 5 minutes.

D) When we manually execute the COPY procedure.

Question 6

What types of stages are available in Snowflake?

A) External stages.

B) Mix stages.

C) Internal Stages.

D) Provider Stages.

Question 7

Will Snowpipe reload a file with the same name if it was modified 15 days after the original load and copied over to the stage?

A) No. Snowpipe ignores any files that have already been loaded.

B) Yes, it will load the file, causing potential duplicates, because Snowpipe keeps load history for only 14 days.

Question 8

Are shared databases exclusively read-only databases?

A) Yes

B) No

Question 9

Can a single storage integration support multiple external stages?

A) Yes

B) No

Question 10

Which of the following scenarios represents scaling out for a Snowflake virtual warehouse?

A) Changing the size of a warehouse from L to XL.

B) Adding a new virtual warehouse of the same size.

C) Adding more data storage to your Snowflake account.

D) Adding more EC2 instances for SQS.

Solutions

Question 1

D) XXS.

Explanation:

There is no XXS (Double Extra Small) warehouse size; the smallest available size is X-Small (XS). Refer to Snowflake’s documentation for the full list of sizes: https://docs.snowflake.com/en/user-guide/warehouses-overview
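The size is set when a warehouse is created (or changed later with ALTER WAREHOUSE). A minimal sketch, with a hypothetical warehouse name:

```sql
-- Create a warehouse at the smallest valid size (X-Small)
CREATE WAREHOUSE IF NOT EXISTS demo_wh
  WITH WAREHOUSE_SIZE = 'XSMALL'
       AUTO_SUSPEND = 60      -- suspend after 60 seconds of inactivity
       AUTO_RESUME = TRUE;

-- 'XXSMALL' is not a valid value and would raise an error
```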

Question 2

A) True.

Explanation:

The Fail-safe retention period for temporary and transient tables in Snowflake is indeed 0 days. Unlike permanent tables, they have no Fail-safe period. https://docs.snowflake.com/en/user-guide/tables-temp-transient
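The table type is declared at creation time. A sketch with hypothetical table names:

```sql
-- Transient table: no Fail-safe, Time Travel retention of 0 or 1 day
CREATE TRANSIENT TABLE staging_events (
  event_id NUMBER,
  payload  VARIANT
);

-- Temporary table: exists only for the session, also no Fail-safe
CREATE TEMPORARY TABLE session_scratch (
  id NUMBER
);
```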

Question 3

B) False.

Explanation:

The Time Travel feature in Snowflake allows you to access historical data, including data that has been altered or deleted, within a specified time frame. While using Time Travel may result in additional storage costs, this feature is not dependent on continuous backups like traditional databases.

When any DML operations are performed on a table, Snowflake retains previous versions of the table data for a defined period of time. This enables querying earlier versions of the data using the AT | BEFORE clause. https://docs.snowflake.com/en/user-guide/data-time-travel
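The AT | BEFORE clause can be sketched as follows (the table name is hypothetical, and the query ID is a placeholder):

```sql
-- Query the table as it existed one hour ago
SELECT * FROM orders AT(OFFSET => -3600);

-- Query the table as it existed just before a given statement ran
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');
```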

Question 4

B) 100

Explanation:

A single task can have a maximum of 100 predecessor tasks and 100 child tasks. https://docs.snowflake.com/en/user-guide/tasks-intro
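Predecessors are declared with the AFTER clause. In a task DAG, only the root task carries a schedule; every other task lists its predecessors. A sketch with hypothetical names:

```sql
-- Root task: the only task in the DAG with a schedule
CREATE TASK root_task
  WAREHOUSE = demo_wh
  SCHEDULE = '60 MINUTE'
AS
  SELECT 1;

-- Two tasks that each run after the root
CREATE TASK load_orders
  WAREHOUSE = demo_wh
  AFTER root_task
AS
  COPY INTO orders FROM @orders_stage;

CREATE TASK load_customers
  WAREHOUSE = demo_wh
  AFTER root_task
AS
  COPY INTO customers FROM @customers_stage;

-- Child task with two predecessors (up to 100 are allowed)
CREATE TASK build_report
  WAREHOUSE = demo_wh
  AFTER load_orders, load_customers
AS
  INSERT INTO daily_report SELECT CURRENT_TIMESTAMP();
```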

Question 5

A) As soon as data files are available in a stage.

Explanation:

Snowpipe enables loading data from files as soon as they’re available in a stage. https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro
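A pipe wraps a COPY statement; with AUTO_INGEST enabled, cloud event notifications trigger the load as files land in the stage. A sketch with hypothetical object names:

```sql
-- Pipe that loads new files automatically via cloud event notifications
CREATE PIPE events_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_events
  FROM @events_stage
  FILE_FORMAT = (TYPE = 'JSON');
```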

Question 6

A) External stages.

C) Internal Stages.

Explanation:

In Snowflake, there are primarily two types of stages:

A) External Stages: These stages reference data files stored outside Snowflake, in a cloud storage service such as AWS S3, Azure Blob Storage, or Google Cloud Storage.

C) Internal Stages: These stages store data files within Snowflake itself, making them a part of the Snowflake ecosystem.

https://docs.snowflake.com/en/sql-reference/sql/create-stage
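Both types are created with CREATE STAGE; the presence of a URL distinguishes them. A sketch (the bucket URL and integration name are hypothetical):

```sql
-- Internal named stage: files are stored inside Snowflake
CREATE STAGE my_internal_stage;

-- External stage: references files in cloud storage
CREATE STAGE my_external_stage
  URL = 's3://my-bucket/data/'
  STORAGE_INTEGRATION = s3_int;
```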

Question 7

B) Yes, it will load the file, causing potential duplicates, because Snowpipe keeps load history for only 14 days.

Explanation:

Files modified and staged again within 14 days:

Snowpipe ignores modified files that are staged again. To reload modified data files, it is currently necessary to recreate the pipe object using the CREATE OR REPLACE PIPE syntax.

Files modified and staged again after 14 days:

Snowpipe loads the data again, potentially resulting in duplicate records in the target table.

https://docs.snowflake.com/en/user-guide/data-load-snowpipe-ts#other-issues
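The recreate step mentioned above can be sketched as follows (object names are hypothetical):

```sql
-- Recreating the pipe resets its load history, so modified files
-- staged again within 14 days can be picked up on the next load
CREATE OR REPLACE PIPE events_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_events
  FROM @events_stage
  FILE_FORMAT = (TYPE = 'JSON');
```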

Question 8

A) Yes

Explanation:

All database objects shared between accounts are read-only (i.e. the objects cannot be modified or deleted, including adding or modifying table data).

Consumer accounts cannot modify or delete objects or data within a shared database. The primary purpose of sharing databases is to enable other accounts to query the data without altering it.

https://docs.snowflake.com/en/user-guide/data-sharing-intro

Question 9

A) Yes

Explanation:

A single storage integration can support multiple external stages. The URL in the stage definition must align with the storage location specified for the STORAGE_ALLOWED_LOCATIONS parameter.

STORAGE_ALLOWED_LOCATIONS = ('cloud_specific_url')

Explicitly limits external stages that use the integration to reference one or more storage locations (i.e. S3 bucket, GCS bucket, or Azure container). Supports a comma-separated list of URLs for existing buckets and, optionally, paths used to store data files for loading/unloading. Alternatively supports the * wildcard, meaning “allow access to all buckets and/or paths”.

https://docs.snowflake.com/en/sql-reference/sql/create-storage-integration
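A sketch of one integration backing two stages (the role ARN, bucket URLs, and object names are all hypothetical):

```sql
-- One storage integration allowing two bucket paths
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://bucket-a/raw/', 's3://bucket-b/exports/');

-- Multiple external stages can reuse the same integration,
-- as long as each URL falls within the allowed locations
CREATE STAGE raw_stage
  URL = 's3://bucket-a/raw/'
  STORAGE_INTEGRATION = s3_int;

CREATE STAGE export_stage
  URL = 's3://bucket-b/exports/'
  STORAGE_INTEGRATION = s3_int;
```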

Question 10

B) Adding a new virtual warehouse of the same size.

Explanation:

Scaling out in Snowflake involves adding more virtual warehouses of the same size to work in parallel. This approach increases concurrency and allows more queries to run simultaneously. In contrast, scaling up involves changing the size of an existing warehouse to provide more compute power, while the remaining options, adding data storage or EC2 instances, are unrelated to scaling virtual warehouses.

Snowflake supports two ways to scale warehouses:

  • Scale up by resizing a warehouse.
  • Scale out by adding clusters to a multi-cluster warehouse (requires Snowflake Enterprise Edition or higher).
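The two approaches can be sketched as follows (the warehouse name is hypothetical):

```sql
-- Scale up: resize an existing warehouse (more compute per cluster)
ALTER WAREHOUSE demo_wh SET WAREHOUSE_SIZE = 'XLARGE';

-- Scale out: multi-cluster warehouse (Enterprise Edition or higher);
-- Snowflake adds same-size clusters as concurrency demand grows
ALTER WAREHOUSE demo_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3;
```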

Thanks for Reading!

If you like my work and want to support me:

  1. You can follow me on Medium here.
  2. Feel free to clap if this post is helpful for you! :)
