The day has finally come: a highly performant, efficient open-source multilingual language model has arrived!
On June 27, 2024, Google DeepMind announced the official release of Gemma 2, available in 9 billion (9B) and 27 billion (27B) parameter sizes.
These models represent a leap forward in AI technology, delivering a level of performance and efficiency previously achieved only by much larger closed-source models.
In this article, I analyze the multilingual understanding of Gemma 2 (9B) and Gemma 2 (27B) compared with state-of-the-art open-source and closed-source LLMs:
- Llama 3 (8B) (Released April 18, 2024) (Blog post)
- Phi-3 (14B) (Released April 23, 2024) (Blog post)
- Qwen2 (7B) (Released June 7, 2024) (Blog post)
- OpenAI’s gpt-3.5-turbo
- Google's Gemini 1.0 Pro & Gemini 1.5 Flash
- Anthropic’s Claude 3 Haiku