Microsoft Orca: Leading the Way in AI Reasoning

Image generated by DeepAI

The partnership between Microsoft and OpenAI to develop and deploy AI capabilities holds promising results for the next generation of AI. A notable example of this progress is Microsoft Research's recent introduction of a new AI model called Orca. Orca learns by imitating the reasoning process of large language models such as GPT-4. What makes Orca particularly interesting is that it does not require high computational power or excessive resources to run efficiently.

The Microsoft research team recently published a separate paper about Orca, unveiling several more intriguing details. Orca is an AI model with 13 billion parameters, fine-tuned from the LLaMA-13B foundation model (the same base model behind Vicuna). Leveraging rich signals from GPT-4, Orca learns to imitate intricate step-by-step reasoning processes, produce explanations, and follow complex instructions.

Explanation tuning

In explanation tuning, Orca goes beyond learning from simple query–response pairs: it learns detailed explanations by diving deep into GPT-4's step-by-step reasoning traces. The training queries are drawn from the FLAN 2022 collection. Orca's reasoning was evaluated against Text-davinci-003, ChatGPT, GPT-4, and Vicuna; it substantially outperforms Vicuna and reaches parity with ChatGPT on zero-shot reasoning benchmarks, though it still trails GPT-4.
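Conceptually, explanation tuning augments each query with a system instruction that elicits step-by-step explanations from the teacher model, and the resulting triple becomes one training record. The sketch below is illustrative only; the function and field names are assumptions, not from the paper:

```python
# Hedged sketch of explanation-tuning data construction.
# The system instruction paraphrases the style described in the Orca
# paper; the function and field names are illustrative assumptions.

SYSTEM_INSTRUCTION = (
    "You are a helpful assistant. Think step by step and justify "
    "your answer before giving the final response."
)

def build_training_record(query: str, teacher_response: str) -> dict:
    """Pack one <system, query, response> triple for imitation training."""
    return {
        "system": SYSTEM_INSTRUCTION,
        "query": query,
        "response": teacher_response,  # teacher's step-by-step explanation
    }

record = build_training_record(
    "Who leads India?",
    "Step 1: India's head of government is the Prime Minister. ...",
)
```

A real pipeline would iterate this over the FLAN 2022 queries and collect the teacher responses at scale; this only shows the shape of one record.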

As an example, if a user asks "Who leads India?" in a prompt, Orca can answer that question directly. That's simple. But if the user also asks for the thinking steps, Orca can answer using logical, step-by-step reasoning drawn from informative sources.
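The difference between the two prompting styles can be sketched as follows (a purely illustrative helper, not an official Orca API):

```python
def make_prompt(question: str, with_reasoning: bool = False) -> str:
    """Build a plain prompt, optionally asking for step-by-step reasoning."""
    if with_reasoning:
        # Appending an explicit instruction elicits the thinking steps.
        return f"{question}\nExplain your reasoning step by step."
    return question

# Direct question vs. question plus a request for reasoning steps.
direct = make_prompt("Who leads India?")
reasoned = make_prompt("Who leads India?", with_reasoning=True)
```

The same question yields either a short factual answer or a walked-through explanation, depending on which prompt variant is sent.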

Benchmarks

The Microsoft research team evaluated Orca on BIG-Bench Hard (BBH) with standard zero-shot prompting, and the following differences between ChatGPT and Orca were identified.

Entailment and Semantic Understanding:
Orca performs better at entailment (Formal Fallacies) and semantic understanding (Disambiguation QA and Snarks).

Temporal and Spatial Reasoning:
Orca shows substantially better reasoning capabilities in terms of temporal reasoning, spatial reasoning, and color-based reasoning compared to ChatGPT.

Causal Judgment:
Orca shows good performance on the causal judgment task, which measures the capability of the model to answer a causal question about a short story.

Multilingual Understanding:
Orca and ChatGPT achieve parity on the salient translation error detection task (determining the type of translation error in the translated sentence).

World Knowledge:
Orca underperforms ChatGPT on tasks that require world knowledge (e.g., sports, artists, humor) while doing better on movie recommendation.

Logical and Geometric Reasoning:
ChatGPT shows superior logical reasoning capabilities compared to Orca.

Table Understanding:
ChatGPT has better table understanding and reasoning capabilities than Orca.
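A zero-shot evaluation of the kind used above can be sketched as a simple loop: each task item is sent to the model with no in-context examples, and exact-match accuracy is reported. The harness below is a minimal sketch with a toy stand-in for the model; all names are assumptions, not the paper's actual evaluation code:

```python
from typing import Callable

def zero_shot_accuracy(model: Callable[[str], str],
                       items: list[tuple[str, str]]) -> float:
    """Exact-match accuracy over (prompt, gold_answer) pairs,
    using standard zero-shot prompting (no in-context examples)."""
    correct = sum(
        1 for prompt, gold in items
        if model(prompt).strip().lower() == gold.strip().lower()
    )
    return correct / len(items) if items else 0.0

def toy_model(prompt: str) -> str:
    # Toy stand-in; a real harness would call the LLM here.
    return "yes" if "valid" in prompt else "no"

items = [
    ("Is this argument valid? ...", "yes"),  # toy_model answers correctly
    ("Is this argument valid? ...", "no"),   # toy_model answers incorrectly
]
acc = zero_shot_accuracy(toy_model, items)  # 1 of 2 correct -> 0.5
```

Per-task accuracies computed this way are what the BBH comparison between Orca and ChatGPT reports.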

Conclusion

Microsoft Research has introduced Orca, a 13-billion-parameter AI model that learns by imitating the reasoning process of large language models like GPT-4. Trained via explanation tuning on GPT-4's reasoning traces, Orca outperforms instruction-tuned models such as Vicuna on zero-shot reasoning benchmarks and reaches parity with ChatGPT, while still trailing GPT-4.

References:

Orca: Progressive Learning from Complex Explanation Traces of GPT-4
