{"id":39977,"date":"2025-04-30T17:00:00","date_gmt":"2025-05-01T00:00:00","guid":{"rendered":"https:\/\/azure.microsoft.com\/en-us\/blog\/?p=39977"},"modified":"2025-06-05T14:47:18","modified_gmt":"2025-06-05T21:47:18","slug":"one-year-of-phi-small-language-models-making-big-leaps-in-ai","status":"publish","type":"post","link":"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/","title":{"rendered":"One year of Phi: Small language models making big leaps in AI"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\"><strong>A new era of AI&nbsp;<\/strong><\/p>\n\n\n\n<p class=\"wp-block-paragraph\">One year ago, Microsoft introduced <strong>small language models<\/strong> (SLMs) to customers with the release of <strong>Phi-3<\/strong> on <a href=\"https:\/\/ai.azure.com\/?tid=72f988bf-86f1-41af-91ab-2d7cd011db47\">Azure AI Foundry<\/a>, leveraging research on SLMs to expand the range of efficient AI models and tools available to customers.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Today, we are excited to introduce <strong>Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning<\/strong>\u2014marking a new era for small language models and once again redefining what is possible with small and efficient AI.&nbsp;<\/p>\n\n\n\n<aside class=\"cta-block cta-block--align-left cta-block--has-image wp-block-msx-cta\" data-bi-an=\"CTA Block\">\n\t<div class=\"cta-block__content\">\n\t\t\t\t\t<div class=\"cta-block__image-container\">\n\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/01\/MSC24-ASEAN-developer-Getty-1336501076-rgb-1024x683.jpg\" class=\"cta-block__image\" alt=\"A man sitting at a desk with a computer\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/01\/MSC24-ASEAN-developer-Getty-1336501076-rgb-1024x683.jpg 1024w, 
https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/01\/MSC24-ASEAN-developer-Getty-1336501076-rgb-300x200.jpg 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/01\/MSC24-ASEAN-developer-Getty-1336501076-rgb-768x512.jpg 768w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/01\/MSC24-ASEAN-developer-Getty-1336501076-rgb-1536x1025.jpg 1536w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/>\t\t\t<\/div>\n\t\t\n\t\t<div class=\"cta-block__body\">\n\t\t\t<h2 class=\"cta-block__headline\">Azure AI Foundry<\/h2>\n\t\t\t<p class=\"cta-block__text\">Find the ideal model for your business needs, then tinker, tweak, and customize within a project to achieve all your AI goals.<\/p>\n\t\t\t\t\t\t\t<div class=\"cta-block__actions\">\n\t\t\t\t\t<a\n\t\t\t\t\t\thref=\"https:\/\/ai.azure.com\/?tid=72f988bf-86f1-41af-91ab-2d7cd011db47\"\n\t\t\t\t\t\tclass=\"btn cta-block__link btn-link\"\n\t\t\t\t\t\t\t\t\t\t\t>\n\t\t\t\t\t\tDiscover more\t\t\t\t\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t<\/div>\n<\/aside>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"reasoning-models-the-next-step-forward\">Reasoning models, the next step forward<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Reasoning models<\/strong> are trained to leverage inference-time scaling to perform complex tasks that demand multi-step decomposition and internal reflection. They excel in mathematical reasoning and are emerging as the backbone of agentic applications with complex, multi-faceted tasks. Such capabilities are typically found only in large frontier models.&nbsp;Phi-reasoning models introduce a new category of small language models. Using distillation, reinforcement learning, and high-quality data, these models balance size and performance. They are small enough for low-latency environments yet maintain strong reasoning capabilities that rival much bigger models. 
This blend allows even resource-limited devices to perform complex reasoning tasks efficiently.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"phi-4-reasoning-and-phi-4-reasoning-plus\">Phi-4-reasoning and Phi-4-reasoning-plus&nbsp;<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Phi-4-reasoning <\/strong>is a 14-billion-parameter open-weight reasoning model that rivals much larger models on complex reasoning tasks. Trained via supervised fine-tuning of Phi-4 on carefully curated reasoning demonstrations from OpenAI o3-mini, Phi-4-reasoning generates detailed reasoning chains that effectively leverage additional inference-time compute. The model demonstrates that meticulous data curation and high-quality synthetic datasets allow smaller models to compete with larger counterparts.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Phi-4-reasoning-plus<\/strong> builds upon Phi-4-reasoning capabilities, further trained with reinforcement learning to utilize more inference-time compute, using 1.5x more tokens than Phi-4-reasoning, to deliver higher accuracy.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Despite their significantly smaller size, both models achieve better performance than OpenAI o1-mini and DeepSeek-R1-Distill-Llama-70B on most benchmarks, including mathematical reasoning and Ph.D.-level science questions. They also outperform the full DeepSeek-R1 model (671 billion parameters) on the AIME 2025 test, the 2025 qualifier for the USA Math Olympiad. 
Both models are available on <a href=\"https:\/\/ai.azure.com\/explore\/models\/Phi-4-reasoning\/version\/1\/registry\/azureml?tid=72f988bf-86f1-41af-91ab-2d7cd011db47\">Azure AI Foundry<\/a> and on HuggingFace: <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-reasoning\">Phi-4-reasoning<\/a> and <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-reasoning-plus\">Phi-4-reasoning-plus<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/image-9-1024x416.webp\" alt=\"A graph of different colored bars\" class=\"wp-image-40129\" \/><figcaption class=\"wp-element-caption\">Figure 1. Phi-4-reasoning performance across representative reasoning benchmarks spanning mathematical and scientific reasoning. We illustrate the performance gains from reasoning-focused post-training of Phi-4 via Phi-4-reasoning (SFT) and Phi-4-reasoning-plus (SFT+RL), alongside a representative set of baselines from two model families: open-weight models from DeepSeek including DeepSeek-R1 (671B Mixture-of-Experts) and its distilled dense variant DeepSeek-R1-Distill-Llama-70B, and OpenAI\u2019s proprietary frontier models o1-mini and o3-mini. 
Phi-4-reasoning and Phi-4-reasoning-plus consistently outperform the base model Phi-4 by significant margins, exceed DeepSeek-R1-Distill-Llama-70B (5x larger), and demonstrate competitive performance against significantly larger models such as DeepSeek-R1.<\/figcaption><\/figure>\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-1024x359.webp\" alt=\"A graph of numbers and a number of people\" class=\"wp-image-40018 webp-format\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-1024x359.webp 1024w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-300x105.webp 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-768x269.webp 768w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-1536x539.webp 1536w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2.webp 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/05\/image-2-1024x359.webp\"><figcaption class=\"wp-element-caption\">Figure 2. Accuracy of models across general-purpose benchmarks for: long input context&nbsp;QA (FlenQA), instruction following (IFEval), coding (HumanEvalPlus), knowledge &amp; language understanding (MMLUPro), safety detection (ToxiGen), and other general skills (ArenaHard and PhiBench).&nbsp;<\/figcaption><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Phi-4-reasoning models introduce a major improvement over Phi-4, surpassing larger models like DeepSeek-R1-Distill-Llama-70B and approaching DeepSeek-R1 across various reasoning and general capabilities, including math, coding, algorithmic problem solving, and planning. 
The <a href=\"https:\/\/aka.ms\/phi-reasoning\/techreport\" target=\"_blank\" rel=\"noreferrer noopener\">technical report<\/a> provides extensive quantitative evidence of these improvements through diverse reasoning tasks.<\/p>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading\" id=\"phi-4-mini-reasoning\">Phi-4-mini-reasoning<\/h2>\n<\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<p class=\"wp-block-paragraph\"><strong>Phi-4-mini-reasoning<\/strong> is designed to meet the demand for a compact reasoning model. This transformer-based language model is optimized for mathematical reasoning, providing high-quality, step-by-step problem solving in environments with constrained compute or latency. Fine-tuned on synthetic data generated by the DeepSeek-R1 model, Phi-4-mini-reasoning balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning difficulty levels from middle school to Ph.D., it&#8217;s ideal for educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems.&nbsp;Try out the model on <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-mini-reasoning\/blob\/main\/Phi-4-Mini-Reasoning.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Azure AI Foundry<\/a> or <a href=\"https:\/\/aka.ms\/phi4-mini-reasoning\/hf\">HuggingFace<\/a> today.<\/p>\n<\/div>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Screenshot-2025-04-30-193715-1024x328.webp\" alt=\"A graph of numbers and a number of marks\" class=\"wp-image-40226\" \/><figcaption class=\"wp-element-caption\">Figure 3. The graph compares the performance of various models on popular math benchmarks for long sentence generation. 
Phi-4-mini-reasoning outperforms its base model on long sentence generation across every evaluation, as well as larger models such as OpenThinker-7B, Llama-3.2-3B-instruct, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Llama-8B, and Bespoke-Stratos-7B. Phi-4-mini-reasoning is comparable to OpenAI o1-mini across math benchmarks, surpassing its performance on the Math-500 and GPQA Diamond evaluations. As seen above, Phi-4-mini-reasoning with 3.8B parameters outperforms models over twice its size.<\/figcaption><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">For more information about the model, read the <a href=\"https:\/\/arxiv.org\/pdf\/2504.21233\" target=\"_blank\" rel=\"noreferrer noopener\">technical report<\/a>, which provides additional quantitative insights.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"small-footprint-big-impact-phi-s-evolution\">Phi reasoning models in action&nbsp;<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Phi\u2019s evolution over the last year has continually pushed the envelope of quality vs. size, expanding the family with new features to address diverse needs.&nbsp;Across the scale of Windows 11 devices, these models are available to run locally on CPUs and GPUs.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">As Windows works towards creating a new type of PC, Phi models have become an integral part of Copilot+ PCs with the NPU-optimized <a href=\"https:\/\/blogs.windows.com\/windowsexperience\/2024\/12\/06\/phi-silica-small-but-mighty-on-device-slm\/\" target=\"_blank\" rel=\"noreferrer noopener\">Phi Silica variant<\/a>. 
This highly efficient, OS-managed version of Phi is designed to be preloaded in memory, delivering blazing-fast time-to-first-token responses and power-efficient token throughput so it can be invoked concurrently with other applications running on your PC.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It is used in core experiences like <a href=\"https:\/\/support.microsoft.com\/en-us\/windows\/click-to-do-do-more-with-what-s-on-your-screen-6848b7d5-7fb0-4c43-b08a-443d6d3f5955\" target=\"_blank\" rel=\"noreferrer noopener\">Click to Do<\/a>, providing useful text intelligence tools for any content on your screen, and is available as <a href=\"https:\/\/learn.microsoft.com\/en-us\/windows\/ai\/apis\/phi-silica?tabs=csharp0%2Ccsharp1%2Ccsharp2%2Ccsharp3\" target=\"_blank\" rel=\"noreferrer noopener\">developer APIs<\/a> that can be readily integrated into applications; it is already used in several productivity applications, such as Outlook, to offer Copilot summary features offline.&nbsp;These small but mighty models have already been optimized and integrated for use in applications across the breadth of our PC ecosystem.&nbsp;The Phi-4-reasoning and Phi-4-mini-reasoning models leverage the low-bit optimizations for Phi Silica and will soon be available to run on Copilot+ PC NPUs.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong><em>Update: May 15, 2025:<\/em><\/strong><em>&nbsp;Today, we are pleased to announce that the Phi-4-reasoning and Phi-4-mini-reasoning models optimized using ONNX are now available to use on your Snapdragon-powered Copilot+ PCs. By offloading these models to the Neural Processing Unit (NPU), inference-time compute consumes significantly less power. This allows reasoning models such as Phi-4 to take advantage of inference-time compute for higher accuracy more efficiently. 
<u><a href=\"https:\/\/learn.microsoft.com\/en-us\/windows\/ai\/toolkit\/toolkit-getting-started?tabs=rest\">Get started today by downloading the AI Toolkit extension in VS Code<\/a><\/u>.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"safety-and-microsoft-s-approach-to-responsible-ai\">Safety and Microsoft\u2019s approach to responsible AI&nbsp;<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">At Microsoft, <a href=\"https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai\" target=\"_blank\" rel=\"noreferrer noopener\">responsible AI<\/a> is a fundamental principle guiding the development and deployment of AI systems, including our Phi models. Phi models are developed in accordance with Microsoft AI principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The Phi family of models has adopted a robust safety post-training approach, leveraging a combination of Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Reinforcement Learning from Human Feedback (RLHF) techniques. These methods utilize various datasets, including publicly available datasets focused on helpfulness and harmlessness, as well as various safety-related questions and answers. While the Phi family of models is designed to perform a wide range of tasks effectively, it is important to acknowledge that all AI models may exhibit limitations. 
To better understand these limitations and the measures in place to address them, please refer to the model cards below, which provide detailed information on responsible AI practices and guidelines.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-a89b3969 wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/ai\/responsible-ai?msockid=2e923f4e6e1064c017fe2d466fa365a3\">Responsible AI at Microsoft<\/a><\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"learn-more-here\">Learn more here:&nbsp;<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Try out the new models on <a href=\"https:\/\/aka.ms\/try-phi\" target=\"_blank\" rel=\"noreferrer noopener\">Azure AI Foundry<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Read the <a href=\"https:\/\/aka.ms\/phicookbook\" target=\"_blank\" rel=\"noreferrer noopener\">Phi Cookbook<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Read about <a href=\"https:\/\/aka.ms\/PhiReasoningEdge\" target=\"_blank\" rel=\"noreferrer noopener\">Phi reasoning models on edge devices<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Learn more about <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-mini-reasoning\" target=\"_blank\" rel=\"noreferrer noopener\">Phi-4-mini-reasoning<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Learn more about <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-reasoning\" target=\"_blank\" rel=\"noreferrer noopener\">Phi-4-reasoning<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Learn more about <a href=\"https:\/\/huggingface.co\/microsoft\/Phi-4-reasoning-plus\" target=\"_blank\" rel=\"noreferrer noopener\">Phi-4-reasoning-plus<\/a>.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Read more about Phi reasoning on the <a 
href=\"https:\/\/techcommunity.microsoft.com\/blog\/educatordeveloperblog\/showcasing-phi-4-reasoning-a-game-changer-for-ai-developers\/4409892\">Educators Developer blog<\/a>.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.<\/p>\n","protected":false},"author":45,"featured_media":40037,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":["aiblog-content-sync"],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","footnotes":"","msx_community_cta_settings":[]},"categories":[1454],"tags":[2671,2853,3167],"audience":[3072],"content-type":[1465],"product":[3164],"tech-community":[2993],"topic":[],"coauthors":[3174,3230],"class_list":["post-39977","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-machine-learning","tag-ai","tag-phi-3","tag-small-language-models-slms","audience-ai-professionals","content-type-announcements","product-microsoft-foundry","review-flag-1680286581-56","review-flag-1680286581-364","review-flag-1-1680286581-825","review-flag-2-1680286581-601","review-flag-3-1680286581-173","review-flag-4-1680286581-250","review-flag-lever-1680286579-649","review-flag-microsofts","review-flag-new-1680286579-546"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog<\/title>\n<meta name=\"description\" content=\"Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.\u00a0Learn more.\" \/>\n<meta name=\"robots\" content=\"index, follow, 
max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog\" \/>\n<meta property=\"og:description\" content=\"Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.\u00a0Learn more.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Azure Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/microsoftazure\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-01T00:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-05T21:47:18+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Azure_1053431_Blog_250429.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Weizhu Chen, Ece Kamar\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@azure\" \/>\n<meta name=\"twitter:site\" content=\"@azure\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Weizhu Chen, Ece Kamar\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\"},\"author\":[{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/weizhu-chen\/\",\"@type\":\"Person\",\"@name\":\"Weizhu Chen\"},{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/ece-kumar\/\",\"@type\":\"Person\",\"@name\":\"Ece Kamar\"}],\"headline\":\"One year of Phi: Small language models making big leaps in AI\",\"datePublished\":\"2025-05-01T00:00:00+00:00\",\"dateModified\":\"2025-06-05T21:47:18+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\"},\"wordCount\":1259,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Azure_1053431_Blog_250429.webp\",\"keywords\":[\"AI\",\"Phi-3\",\"Small language models (SLMs)\"],\"articleSection\":[\"AI + machine 
learning\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\",\"name\":\"One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Azure_1053431_Blog_250429.webp\",\"datePublished\":\"2025-05-01T00:00:00+00:00\",\"dateModified\":\"2025-06-05T21:47:18+00:00\",\"description\":\"Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.\u00a0Learn 
more.\",\"breadcrumb\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#primaryimage\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Azure_1053431_Blog_250429.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Azure_1053431_Blog_250429.webp\",\"width\":1920,\"height\":1080,\"caption\":\"A white and green paper with green text\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog home\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI + machine learning\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/category\/ai-machine-learning\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"One year of Phi: Small language models making big leaps in AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"name\":\"Microsoft Azure Blog\",\"description\":\"Get the latest Azure news, updates, and announcements from the Azure blog. 
From product updates to hot topics, hear from the Azure experts.\",\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\",\"name\":\"Microsoft Azure Blog\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"width\":512,\"height\":512,\"caption\":\"Microsoft Azure Blog\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/microsoftazure\",\"https:\/\/x.com\/azure\",\"https:\/\/www.instagram.com\/microsoftdeveloper\/\",\"https:\/\/www.linkedin.com\/company\/16188386\",\"https:\/\/www.youtube.com\/user\/windowsazure\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/person\/c202d869dd6f3cb29ea80999e19313a9\",\"name\":\"Jordan 
Davis\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g4accb07cb584a4dd53673b002bf33930\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ec9971e70dcc01d0fb3aee74bf0f300b2dc40f42a228ed523c90f16cae07c017?s=96&d=mm&r=g\",\"caption\":\"Jordan Davis\"},\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/jordandavis\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog","description":"Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.\u00a0Learn more.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/","og_locale":"en_US","og_type":"article","og_title":"One year of Phi: Small language models making big leaps in AI | Microsoft Azure Blog","og_description":"Microsoft continues to add to the conversation by unveiling its newest models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.\u00a0Learn more.","og_url":"https:\/\/azure.microsoft.com\/en-us\/blog\/one-year-of-phi-small-language-models-making-big-leaps-in-ai\/","og_site_name":"Microsoft Azure 
<p class="wp-block-paragraph"><em>Written by Weizhu Chen and Ece Kamar. Published May 1, 2025.</em></p>