[2024] Snowflake Cortex — COMPLETE Function

Given a prompt, the instruction-following COMPLETE function generates a response using your choice of language model. In the simplest use case, the prompt is a single string. You may also provide a conversation including multiple prompts and responses for interactive chat-style usage, and in this form of the function you can also specify hyperparameter options to customize the style and size of the output.

Supported Models:

  • mistral-large
  • mixtral-8x7b
  • llama2-70b-chat
  • gemma-7b
  • mistral-7b

Syntax

SNOWFLAKE.CORTEX.COMPLETE(model, prompt_or_history [, options])

where:

model: The LLM to use. Must be one of the supported models listed above.

prompt_or_history: The prompt or conversation history used to generate a completion. If you pass options, this argument must be an array of objects representing a conversation in chronological order; otherwise it must be a string.

For an array prompt, each object must have a role key and a content key. The role identifies who produced the message ('system', 'user', or 'assistant'), and content is the text of that message.

options: An object used to set the model's hyperparameters. You can set temperature, top_p, and max_tokens.

temperature: A value that controls the randomness of the model's output; higher values produce more varied responses.

top_p: A value that also controls the randomness and diversity of the output. Sounds similar to temperature, right? The major difference is that top_p restricts the set of possible tokens the model can output, while temperature influences which tokens are chosen at each step.

max_tokens: The maximum number of output tokens in the response. If you set a small value, the response may be truncated.
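For instance, the simplest form passes a single string and returns the completion as a plain string. The model choice below is just illustrative; any supported model works:

```sql
-- Simplest form: a single-string prompt, no options.
-- Returns the completion as a plain string.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-7b',
    'What is a data warehouse? Answer in one sentence.'
);
```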

Function Output

If options is not passed as an argument, the output is a plain string. When options is passed, the return value is a string representation of a JSON object that contains:

choices: An array of the model's responses.

created: UNIX timestamp of when the completion was generated.

model: Name of the model that produced the response.

usage: Details about the number of tokens consumed and generated by the completion.
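Because the return value is a string representation of a JSON object, you can unpack individual fields directly in SQL with PARSE_JSON and path traversal. The sketch below captures the result in a column aliased response (the alias is just an illustrative name):

```sql
-- Call COMPLETE with options, then pull fields out of the JSON result.
WITH result AS (
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-7b',
        [{'role': 'user', 'content': 'What is Snowflake Cortex?'}],
        {'temperature': 0.2, 'max_tokens': 100}
    ) AS response
)
SELECT
    PARSE_JSON(response):choices[0]:messages::string AS answer,
    PARSE_JSON(response):model::string               AS model_used,
    PARSE_JSON(response):usage:total_tokens::int     AS total_tokens
FROM result;
```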

Usage

You can use it with a prompt built from user input, or you can run it over data that resides inside a Snowflake table.

SELECT
    SNOWFLAKE.CORTEX.COMPLETE(
        'llama2-70b-chat',
        [
            {
                'role': 'user',
                'content': 'Suggest me How can I use Llama2 LLM in my routine tasks?'
            }
        ],
        {
            'temperature': 0.8,
            'max_tokens': 1500
        }
    );

Below is the output:

{
  "choices": [
    {
      "messages": " Llama2 is a powerful language model that can be used in various ways to automate and streamline your routine tasks. Here are some suggestions on how you can use Llama2 LLM in your daily tasks:\n\n1. Chatbots: You can use Llama2 to build chatbots that can handle customer queries, provide support, and answer frequently asked questions. This can save a lot of time and resources for your customer support team.\n2. Automated writing: Llama2 can be used to generate automated responses to emails, messages, and other written communication. This can save you time and help you avoid repetitive tasks.\n3. Data analysis: Llama2 can be used to analyze large datasets and extract insights, which can be useful in various industries such as finance, marketing, and healthcare.\n4. Content creation: Llama2 can be used to generate content, such as articles, blog posts, and social media posts. This can save you time and help you maintain a consistent content calendar.\n5. Language translation: Llama2 can be used to translate text from one language to another. This can be useful for businesses that operate globally and need to communicate with customers in different languages.\n6. Sentiment analysis: Llama2 can be used to analyze customer feedback, reviews, and social media posts to understand public opinion and sentiment. This can help businesses improve their products and services.\n7. Summarization: Llama2 can be used to summarize long documents, such as reports, articles, and emails. This can save you time and help you understand the main points of a document quickly.\n8. Question answering: Llama2 can be used to build systems that can answer questions based on the information it has been trained on. This can be useful for customer support, trivia games, and other applications.\n9. Text classification: Llama2 can be used to classify text into categories, such as spam vs. non-spam emails, positive vs. negative reviews, and relevant vs. irrelevant content.\n10. Generative tasks: Llama2 can be used for generative tasks such as creative writing, poetry, and art. This can be a fun way to explore your creativity and generate new ideas.\n\nThese are just a few examples of how you can use Llama2 LLM in your routine tasks. The possibilities are endless, and it's up to your imagination and creativity to find new and innovative ways to use this powerful language model."
    }
  ],

  "created": 1709705481,

  "model": "llama2-70b-chat",

  "usage": { "completion_tokens": 551, "prompt_tokens": 28, "total_tokens": 579 }
}
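The same function can also run over rows in a table, for example to summarize stored text. The sketch below assumes a hypothetical table named reviews with a column review_text; substitute your own table and column names:

```sql
-- Run COMPLETE once per row of a table.
-- The table "reviews" and column "review_text" are hypothetical names.
SELECT
    review_text,
    SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-7b',
        CONCAT('Summarize this customer review in one sentence: ', review_text)
    ) AS summary
FROM reviews;
```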

About Me:

Hi there! I am Divyansh Saxena

I am an experienced Cloud Data Engineer with a proven track record of success in Snowflake Data Cloud technology. Highly skilled in designing, implementing, and maintaining data pipelines, ETL workflows, and data warehousing solutions. Possessing advanced knowledge of Snowflake’s features and functionality, I am a Snowflake Data Superhero & Snowflake Snowpro Core SME. With a major career in Snowflake Data Cloud, I have a deep understanding of cloud-native data architecture and can leverage it to deliver high-performing, scalable, and secure data solutions.

Follow me on Medium for regular updates on Snowflake Best Practices and other trending topics:

Also, I am open to connecting all data enthusiasts across the globe on LinkedIn:

https://www.linkedin.com/in/divyanshsaxena/
