{"id":39569,"date":"2025-04-05T19:51:26","date_gmt":"2025-04-06T02:51:26","guid":{"rendered":""},"modified":"2025-04-07T07:34:11","modified_gmt":"2025-04-07T14:34:11","slug":"introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks","status":"publish","type":"post","link":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/","title":{"rendered":"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">We are excited to share the first models in the Llama 4 herd are available today in <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-foundry\/\">Azure AI Foundry<\/a> and <a href=\"https:\/\/aka.ms\/Llama-4ADB\">Azure Databricks<\/a>, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-a89b3969 wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-text-align-center wp-element-button\" href=\"https:\/\/ai.azure.com\/\">Create with Azure AI Foundry<\/a><\/div>\n<\/div>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Today, we are bringing Meta&#8217;s Llama 4 Scout and Maverick models into Azure AI Foundry as managed compute offerings:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"> <strong>Llama 4 Scout Models<\/strong>\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Llama-4-Scout-17B-16E<\/li>\n\n\n\n<li 
class=\"wp-block-list-item\">Llama-4-Scout-17B-16E-Instruct<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Llama 4 Maverick Models<\/strong>\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Llama 4-Maverick-17B-128E-Instruct-FP8<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Azure AI Foundry is designed for multi-agent use cases, enabling seamless collaboration between different AI agents. This opens up new frontiers in AI applications, from complex problem-solving to dynamic task management. Imagine a team of AI agents working together to analyze vast datasets, generate creative content, and provide real-time insights across multiple domains. The possibilities are endless.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"520\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Benchmarks-1.jpg\" alt=\"Model ecosystem benchmark comparison graphic provided by Meta\" class=\"wp-image-39571\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Benchmarks-1.jpg 800w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Benchmarks-1-300x195.jpg 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Benchmarks-1-768x499.jpg 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">To accommodate a range of use cases and developer needs, Llama 4 models come in both smaller and larger options. These models integrate mitigations at every layer of development, from pre-training to post-training. 
Tunable system-level mitigations shield developers from adversarial users, empowering them to create helpful, safe, and adaptable experiences for their Llama-supported applications.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"llama-4-scout-models-power-and-precision\">Llama 4 Scout models: Power and precision<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">According to Meta, Llama 4 Scout is one of the best multimodal models in its class and is more powerful than Meta\u2019s Llama 3 models, while fitting on a single H100 GPU. Llama 4 Scout also increases the supported context length from 128K in Llama 3 to an industry-leading 10 million tokens. This opens up a world of possibilities, including multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Targeted use cases include summarization, personalization, and reasoning. Thanks to its long context and efficient size, Llama 4 Scout shines in tasks that require condensing or analyzing extensive information. It can generate summaries or reports from extremely lengthy inputs, personalize its responses using detailed user-specific data (without forgetting earlier details), and perform complex reasoning across large knowledge sets. <\/p>\n\n\n\n<p class=\"wp-block-paragraph\">For example, Scout could analyze all documents in an enterprise SharePoint library to answer a specific query or read a multi-thousand-page technical manual to provide troubleshooting advice. 
It\u2019s designed to be a diligent \u201cscout\u201d that traverses vast information and returns the highlights or answers you need.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"llama-4-maverick-models-innovation-at-scale\">Llama 4 Maverick models: Innovation at scale<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">As a general-purpose LLM, Llama 4 Maverick contains 17 billion active parameters, 128 experts, and 400 billion total parameters, offering high quality at a lower price compared to Llama 3.3 70B. Maverick excels in image and text understanding with support for 12 languages, enabling the creation of sophisticated AI applications that bridge language barriers. Maverick is ideal for precise image understanding and creative writing, making it well-suited for general assistant and chat use cases. For developers, it offers state-of-the-art intelligence with high speed, optimized for best response quality and tone.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Targeted use cases include optimized chat scenarios that require high-quality responses. Meta fine-tuned Llama 4 Maverick to be an excellent conversational agent. It is the flagship chat model of the Meta Llama 4 family\u2014think of it as the multilingual, multimodal counterpart to a ChatGPT-like assistant. <\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It\u2019s particularly well-suited for interactive applications: <\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Customer support bots that need to understand images users upload.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">AI creative partners that can discuss and generate content in various languages.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Internal enterprise assistants that can help employees by answering questions and handling rich media input. 
<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">With Maverick, enterprises can build high-quality AI assistants that converse naturally (and politely) with a global user base and leverage visual context when needed.<\/p>\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-1024x751.webp\" alt=\"Diagram of mixture of experts (MoE) architecture provided by Meta\" class=\"wp-image-39572 webp-format\" srcset=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-1024x751.webp 1024w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-300x220.webp 300w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-768x563.webp 768w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-1536x1126.webp 1536w, https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-2048x1501.webp 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" data-orig-src=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2025\/04\/Llama-Diagram-1-1024x751.webp\"><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"architectural-innovations-in-llama-4-multimodal-early-fusion-and-moe\">Architectural innovations in Llama 4: Multimodal early-fusion and MoE<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">According to Meta, two key innovations set Llama 4 apart: native multimodal support with early fusion and a sparse Mixture of Experts (MoE) design for efficiency and scale.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>Early-fusion multimodal transformer<\/strong>: Llama 4 uses an early fusion approach, treating text, images, and video frames as a single sequence of tokens from the start. 
This enables the model to understand and generate various media together. It excels at tasks involving multiple modalities, such as analyzing documents with diagrams or answering questions about a video&#8217;s transcript and visuals. For enterprises, this allows AI assistants to process full reports (text + graphics + video snippets) and provide integrated summaries or answers.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Cutting-edge Mixture of Experts (MoE) architecture<\/strong>: To achieve strong performance without prohibitive compute costs, Llama 4 utilizes a sparse Mixture of Experts (MoE) architecture. This means the model comprises numerous expert sub-models, referred to as &#8220;experts,&#8221; with only a small subset active for any given input token. This design not only enhances training efficiency but also improves inference scalability. Consequently, the model can handle more queries simultaneously by distributing the computational load across various experts, enabling deployment in production environments without requiring large single-instance GPUs. The MoE architecture allows Llama 4 to expand its capacity without escalating costs, offering a significant advantage for enterprise implementations.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"commitment-to-safety-and-best-practices\">Commitment to safety and best practices<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Meta built Llama 4 with the best practices outlined in their <a href=\"https:\/\/ai.meta.com\/static-resource\/july-responsible-use-guide\">Developer Use Guide: AI Protections<\/a>. This includes integrating mitigations at each layer of model development, from pre-training to post-training, and tunable system-level mitigations that shield developers from adversarial attacks. 
And, by making these models available in Azure AI Foundry, they come with proven safety and security guardrails developers come to expect from Azure.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">We empower developers to create helpful, safe, and adaptable experiences for their Llama-supported applications. Explore the Llama 4 models now in the <a href=\"https:\/\/ai.azure.com\/explore\/models?selectedCollection=meta\">Azure AI Foundry Model Catalog<\/a> and in <a href=\"https:\/\/aka.ms\/Llama-4ADB\">Azure Databricks<\/a> and start building with the latest in multimodal, MoE-powered AI\u2014backed by Meta\u2019s research and Azure\u2019s platform strength. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"empowering-innovation-with-meta-llama-4-on-azure\">Empowering innovation with Meta Llama 4 on Azure<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">The availability of Meta Llama 4 on <a href=\"https:\/\/azure.microsoft.com\/products\/ai-foundry\/\">Azure AI Foundry<\/a> and through <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/databricks\">Azure Databricks<\/a> offers customers unparalleled flexibility in choosing the platform that best suits their needs. This seamless integration allows users to harness advanced AI capabilities, enhancing their applications with powerful, secure, and adaptable solutions. We are excited to see what you build next.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. 
This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development.<\/p>\n","protected":false},"author":47,"featured_media":37163,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":[],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","footnotes":"","msx_community_cta_settings":[]},"categories":[1454,1474],"tags":[3165],"audience":[3072,3055],"content-type":[1465],"product":[1803,1544,3164],"tech-community":[],"topic":[],"coauthors":[3071],"class_list":["post-39569","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-machine-learning","category-analytics","tag-language-models","audience-ai-professionals","audience-developers","content-type-announcements","product-azure-ai","product-azure-databricks","product-microsoft-foundry","review-flag-1680286581-364","review-flag-3-1680286581-173","review-flag-4-1680286581-250","review-flag-integ-1680286579-214","review-flag-lever-1680286579-649","review-flag-new-1680286579-546"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure Blog<\/title>\n<meta name=\"description\" content=\"We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. 
This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development. We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure Blog\" \/>\n<meta property=\"og:description\" content=\"We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development. 
We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\" \/>\n<meta property=\"og:site_name\" content=\"Microsoft Azure Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/microsoftazure\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-06T02:51:26+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-07T14:34:11+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1260\" \/>\n\t<meta property=\"og:image:height\" content=\"708\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Asha Sharma\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@azure\" \/>\n<meta name=\"twitter:site\" content=\"@azure\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Asha Sharma\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\"},\"author\":[{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/asha-sharma\/\",\"@type\":\"Person\",\"@name\":\"Asha Sharma\"}],\"headline\":\"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks\",\"datePublished\":\"2025-04-06T02:51:26+00:00\",\"dateModified\":\"2025-04-07T14:34:11+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\"},\"wordCount\":1042,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.webp\",\"keywords\":[\"Language models\"],\"articleSection\":[\"AI + machine 
learning\",\"Analytics\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\",\"name\":\"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure Blog\",\"isPartOf\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.webp\",\"datePublished\":\"2025-04-06T02:51:26+00:00\",\"dateModified\":\"2025-04-07T14:34:11+00:00\",\"description\":\"We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development. 
We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks.\",\"breadcrumb\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#primaryimage\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.webp\",\"width\":1260,\"height\":708,\"caption\":\"background pattern\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog home\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI + machine learning\",\"item\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/category\/ai-machine-learning\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#website\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"name\":\"Microsoft Azure Blog\",\"description\":\"Get the latest Azure news, updates, and announcements from the Azure blog. 
From product updates to hot topics, hear from the Azure experts.\",\"publisher\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization\",\"name\":\"Microsoft Azure Blog\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"contentUrl\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/06\/microsoft_logo.webp\",\"width\":512,\"height\":512,\"caption\":\"Microsoft Azure Blog\"},\"image\":{\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/microsoftazure\",\"https:\/\/x.com\/azure\",\"https:\/\/www.instagram.com\/microsoftdeveloper\/\",\"https:\/\/www.linkedin.com\/company\/16188386\",\"https:\/\/www.youtube.com\/user\/windowsazure\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/#\/schema\/person\/cf8b8963c6771b07061a958ebfcff34d\",\"name\":\"Teri 
Seals-Dormer\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/4f1c6b1df49619573e006bda75a18efb7f99db184762acc79d899b8a6ef768aa?s=96&d=mm&r=gcf75c6bdc56c143a16794dc3648a2ad5\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4f1c6b1df49619573e006bda75a18efb7f99db184762acc79d899b8a6ef768aa?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4f1c6b1df49619573e006bda75a18efb7f99db184762acc79d899b8a6ef768aa?s=96&d=mm&r=g\",\"caption\":\"Teri Seals-Dormer\"},\"url\":\"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/terisealsdormer\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure Blog","description":"We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development. 
We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/","og_locale":"en_US","og_type":"article","og_title":"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure Blog","og_description":"We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks, which enables people to build more personalized multimodal experiences. These models from Meta are designed to seamlessly integrate text and vision tokens into a unified model backbone. This innovative approach allows developers to leverage Llama 4 models in applications that demand vast amounts of unlabeled text, image, and video data, setting a new precedent in AI development. We are excited to share the first models in the Llama 4 herd are available today in Azure AI Foundry and Azure Databricks.","og_url":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/","og_site_name":"Microsoft Azure Blog","article_publisher":"https:\/\/www.facebook.com\/microsoftazure","article_published_time":"2025-04-06T02:51:26+00:00","article_modified_time":"2025-04-07T14:34:11+00:00","og_image":[{"width":1260,"height":708,"url":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.png","type":"image\/png"}],"author":"Asha Sharma","twitter_card":"summary_large_image","twitter_creator":"@azure","twitter_site":"@azure","twitter_misc":{"Written by":"Asha Sharma","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#article","isPartOf":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/"},"author":[{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/author\/asha-sharma\/","@type":"Person","@name":"Asha Sharma"}],"headline":"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks","datePublished":"2025-04-06T02:51:26+00:00","dateModified":"2025-04-07T14:34:11+00:00","mainEntityOfPage":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/"},"wordCount":1042,"commentCount":0,"publisher":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/#organization"},"image":{"@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#primaryimage"},"thumbnailUrl":"https:\/\/azure.microsoft.com\/en-us\/blog\/wp-content\/uploads\/2024\/10\/Azure_Hero_Hexagon_Blue_MagentaGrad_Cropped.webp","keywords":["Language models"],"articleSection":["AI + machine learning","Analytics"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/","url":"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-the-llama-4-herd-in-azure-ai-foundry-and-azure-databricks\/","name":"Introducing the Llama 4 herd in Azure AI Foundry and Azure Databricks | Microsoft Azure 
By Teri Seals-Dormer, Microsoft Azure Blog, April 5, 2025