NexusRaven V2 13B Surpasses GPT-4 in Function Calling for Single, Nested and Parallel Calls
And no, it wasn’t trained on any GPT-3.5/4-generated data, and it generalizes to tools never seen during training!
Too good to be true? Before we get hands-on with this beast, let me explain why this is a big deal.
Translating English instructions into executable code isn’t new. But for the first time, an open-source model isn’t just matching, but surpassing a commercial giant like GPT-4. And we’re not talking simple function calls — NexusRaven v2 13B handles nested and composite functions with ease.
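To make that terminology concrete, here is a minimal, purely illustrative sketch of what single, nested, and parallel calls look like in plain Python. The toy functions and the call syntax are my own assumptions for illustration, not NexusRaven’s actual tool set or output format.

```python
# Hypothetical toy tools used only to illustrate the three call patterns.

def get_user_city(user_id: int) -> str:
    """Look up the city a user lives in (dummy data)."""
    return {1: "Berlin", 2: "Lisbon"}.get(user_id, "Unknown")

def get_weather(city: str) -> str:
    """Return a dummy weather report for a city."""
    return f"Sunny in {city}"

# Single call: "What's the weather in Berlin?"
single = get_weather(city="Berlin")

# Nested call: "What's the weather where user 1 lives?"
# The inner call's result feeds directly into the outer call.
nested = get_weather(city=get_user_city(user_id=1))

# Parallel calls: "Compare the weather for users 1 and 2."
parallel = [
    get_weather(city=get_user_city(user_id=1)),
    get_weather(city=get_user_city(user_id=2)),
]

print(single, nested, parallel, sep="\n")
```

A model that handles only single calls stops at the first pattern; nested and parallel calling is where most open-source models have struggled so far.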
Derived from Meta’s CodeLlama-13B-instruct, NexusRaven V2 13B is fine-tuned exclusively on data from open-source code. That opens the door to both local and cloud-based applications.
What’s also really cool is that the team behind it has set up a leaderboard on Hugging Face, showcasing a diverse array of function-calling scenarios!
In this article, I’ll walk you through:
- Local setup for NexusRaven v2 13B
- Function Calling with NexusRaven v2 13B
- Benchmarks for NexusRaven v2 13B vs. GPT-4 1106 and GPT-3.5
- Use-case ideas
- Resources in case you want to dive deeper