Kubernetes AI toolchain operator
Published: November 15, 2023
You can now run specialized machine learning workloads like large language models (LLMs) on Azure Kubernetes Service (AKS) more cost-effectively and with less manual configuration.
The initial release of the Kubernetes AI toolchain operator, an open source project, automates LLM deployment on AKS by selecting optimally sized CPU and GPU infrastructure for the model. It makes it easy to split inferencing across multiple lower-GPU-count VMs, which increases the number of Azure regions where workloads can run, avoids wait times for higher-GPU-count VMs, and lowers overall cost. You can also choose from preset models with images hosted by AKS, significantly reducing overall inference service setup time.
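As a rough sketch of how a preset deployment works, the operator (the KAITO project) is driven by a `Workspace` custom resource that names a GPU instance type and a preset model; the controller then provisions matching nodes and stands up the inference service. The manifest below is illustrative, based on the project's published examples, and the instance type and preset name shown are assumptions you would adjust for your cluster:

```yaml
# Hypothetical KAITO Workspace manifest: deploy a preset model for inference.
# The operator provisions GPU nodes of the requested instance type and
# pulls the preset model image, so no manual model setup is required.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b        # illustrative name
resource:
  instanceType: "Standard_NC12s_v3" # example Azure GPU VM size; pick per model
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"              # example preset model identifier
```

Applying this with `kubectl apply -f workspace.yaml` would leave the operator to reconcile node provisioning and model serving, rather than you sizing and configuring the GPU infrastructure by hand.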