The fast-paced developments in generative AI continue unabated, even as we approach the end of 2023 and the typical holiday slowdown.
Just recently, Microsoft Research, the company's research arm, announced its new Phi-2 small language model (SLM). Designed to run on laptops and mobile devices, the model boasts impressive performance despite its compact size.
Phi-2 has 2.7 billion parameters (the weighted connections between artificial neurons), yet it competes with much larger models such as Meta's Llama 2-7B and Mistral-7B, both of which have 7 billion parameters.
Microsoft Research highlighted that Phi-2 outperforms Google’s new Gemini Nano 2 model, which has half a billion more parameters. Moreover, Phi-2 delivers responses with less toxicity and bias compared to Llama 2.
Microsoft also referenced Google's widely criticized demo video for its Gemini Ultra model, which showed it solving complex physics problems. Remarkably, despite its far smaller size, Phi-2 answered the same physics problems correctly and corrected a mistaken solution when given identical prompts.
While these results are promising, there is a significant caveat: Phi-2 is currently licensed for research purposes only, under a custom Microsoft Research License that prohibits commercial use. That will disappoint businesses hoping to build products on the model.
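For researchers who do want to experiment with the model within those license terms, getting it running locally should be straightforward. The sketch below assumes the checkpoint is published under the "microsoft/phi-2" identifier on Hugging Face and that a recent transformers release with Phi support is installed (older releases may require trust_remote_code=True); it is an illustrative example, not an official quickstart.

```python
# Minimal sketch: loading Phi-2 locally with Hugging Face transformers.
# Assumes the "microsoft/phi-2" checkpoint identifier and a transformers
# version that natively supports the Phi architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # assumed Hugging Face identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 2.7B model laptop-friendly
    device_map="auto",          # uses a GPU if present, otherwise CPU (requires accelerate)
)

prompt = "Explain why the sky is blue in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is so small, this kind of local evaluation is exactly the use case Microsoft is pitching, even if commercial deployment remains off the table for now.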