Microsoft Phi-3

Microsoft has just launched the next version of its lightweight AI language model, Phi-3 Mini. It is the first of three small LLMs the company plans to release. Phi-3 Mini has about 3.8 billion parameters and is trained on a data set that is smaller than those used for large models like GPT-4.

Microsoft’s compact LLM is now available on Azure, Hugging Face, and Ollama. The tech company plans to release Phi-3 Small and Phi-3 Medium with 7 billion and 14 billion parameters, respectively.
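
Since the model is hosted on Hugging Face, it can be loaded with the standard transformers workflow. The snippet below is a minimal sketch, assuming the `transformers` library is installed and that the Mini variant is published under an identifier along the lines of `microsoft/Phi-3-mini-4k-instruct` (check the hub for the exact name).

```python
# Minimal sketch: running Phi-3 Mini locally via Hugging Face transformers.
# The model identifier is an assumption; verify it on the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place weights on GPU if available
    trust_remote_code=True,
)

prompt = "Explain what a small language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On Ollama, the equivalent would be a single command such as `ollama run phi3`, assuming the model is listed in the Ollama library under that name.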

Here, parameters refer to how many complex instructions a model can understand. The company released Phi-2 back in December, and it performed on par with bigger models like Llama 2.

Microsoft has said that Phi-3 performs much better than the previous versions and can give responses close to those of a model 10 times its size. Eric Boyd, corporate vice president of Azure AI Platform, has stated that Phi-3 Mini is as capable as LLMs like GPT-3.5, just in a smaller form factor.

Boyd has also stated that Phi-3 was trained with a “curriculum.” The team was inspired by how children learn from bedtime stories and books with simpler words and sentence structures that discuss big topics in a simple yet understandable manner.

He also stated that Phi-3 builds on what the previous versions learned: while Phi-1 focused on coding and Phi-2 on reasoning, Phi-3 is better at both coding and reasoning.
