Microsoft strikes back — Phi-3 may change the entire game.
A Highly Capable Language Model Locally on Your Phone! The GGUF binaries are already available, so let’s try it!
This is, almost word for word, the title of the official paper introducing the latest model from the Microsoft powerhouse.
Just one week after the shocking release of the Llama-3 series, Microsoft announced a family of so-called Small Language Models designed to run on mobile devices.
Is it going to work though? Can we test it?
Stick with this article: we will unpack the details and show the easiest way to run it right now, on your laptop too.
Phi-3-mini
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.
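Since the GGUF binaries are already out, here is a minimal sketch of running Phi-3-mini locally with the llama-cpp-python bindings. The model filename below is an assumption — use whatever quantized GGUF file you actually downloaded (e.g., from Hugging Face). The prompt helper follows Phi-3’s chat template (`<|user|>` / `<|end|>` / `<|assistant|>` markers).

```python
# Sketch: local Phi-3-mini inference via llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder, not an official filename.

def build_prompt(user_message: str) -> str:
    """Wrap a user message in Phi-3's chat template."""
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>"

def ask_phi3(model_path: str, question: str) -> str:
    # Imported inside the function so build_prompt() works even
    # without llama-cpp-python installed.
    from llama_cpp import Llama

    # n_ctx=4096 matches Phi-3-mini's 4k context variant.
    llm = Llama(model_path=model_path, n_ctx=4096, verbose=False)
    out = llm(
        build_prompt(question),
        max_tokens=256,
        stop=["<|end|>"],  # stop at the end-of-turn marker
    )
    return out["choices"][0]["text"].strip()

# Usage (once you have a GGUF file on disk; path is hypothetical):
#   answer = ask_phi3("Phi-3-mini-4k-instruct-q4.gguf",
#                     "What is a Small Language Model?")
#   print(answer)
```

This is a sketch under stated assumptions, not the only route: the llama.cpp CLI or Ollama can serve the same GGUF file if you prefer not to touch Python.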