
TensorFlow 2.0 on Azure: Fine-tuning BERT for question tagging

This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning, and John Wu, Program Manager, Azure Machine Learning.

Congratulations to the TensorFlow community on the release of TensorFlow 2.0! In this blog, we aim to highlight some of the ways that Azure can streamline the building, training, and deployment of your TensorFlow model. In addition to reading this blog, check out the demo discussed in more detail below, showing how you can use TensorFlow 2.0 in Azure to fine-tune a BERT (Bidirectional Encoder Representations from Transformers) model for automatically tagging questions.

TensorFlow 1.x is a powerful framework that enables practitioners to build and run deep learning models at massive scale. TensorFlow 2.0 builds on the capabilities of TensorFlow 1.x by integrating more tightly with Keras (a high-level API for building neural networks), enabling eager execution by default, and streamlining the API surface.
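To see what these changes mean in practice, here is a minimal sketch (not from the original post): with eager execution on by default, operations return concrete values immediately, with no sessions or graph plumbing, and tf.keras is the built-in way to define models.

```python
import tensorflow as tf

# Eager execution is the default in TF 2.0: ops run immediately
# and return concrete values, no tf.Session required.
x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
print(tf.matmul(x, w).numpy())  # [[11.]]

# tf.keras is the integrated high-level API for building models.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In TensorFlow 1.x, the matmul above would only produce a symbolic tensor until evaluated inside a session; in 2.0 it is just a value.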

TensorFlow 2.0 on Azure

We've integrated TensorFlow 2.0 with the Azure Machine Learning service to make bringing your TensorFlow workloads into Azure as seamless as possible. Azure Machine Learning service provides an SDK that lets you write machine learning models in your preferred framework and run them on the compute target of your choice, including a single virtual machine (VM) in Azure, a GPU (graphics processing unit) cluster in Azure, or your local machine. The Azure Machine Learning SDK for Python has a dedicated TensorFlow estimator that makes it easy to run TensorFlow training scripts on any compute target you choose.
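As a rough sketch of what that looks like, the dedicated TensorFlow estimator can be configured as below. This assumes a workspace config file and a GPU compute target already exist; the compute target name, directory, and script name are illustrative, not from the original post.

```python
# Hedged sketch: configuring the Azure ML TensorFlow estimator.
# Assumes an Azure ML workspace config.json and a GPU compute target
# named "gpu-cluster" already exist; file names are illustrative.
from azureml.core import Workspace
from azureml.train.dnn import TensorFlow

ws = Workspace.from_config()                    # load workspace details
compute_target = ws.compute_targets["gpu-cluster"]

estimator = TensorFlow(
    source_directory="./train",   # folder containing the training code
    entry_script="train.py",      # your TensorFlow 2.0 training script
    compute_target=compute_target,
    framework_version="2.0",      # pin the TensorFlow version
    use_gpu=True,
)
```

The same estimator definition runs unchanged whether the compute target is a single VM, a GPU cluster, or local compute.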

In addition, the Azure Machine Learning service Notebook VM comes with TensorFlow 2.0 pre-installed, making it easy to run Jupyter notebooks that use TensorFlow 2.0.

TensorFlow 2.0 on Azure demo: Automated labeling of questions with TF 2.0, Azure, and BERT

As we’ve mentioned, TensorFlow 2.0 makes it easy to get started building deep learning models. Running TensorFlow 2.0 on Azure adds the performance and scale of Microsoft’s global, enterprise-grade cloud, whatever your application may be.

To highlight the end-to-end use of TensorFlow 2.0 on Azure, we prepared a workshop, delivered at TensorFlow World, on using TensorFlow 2.0 to train a BERT model to suggest tags for questions asked online. Check out the full GitHub repository, or go through the higher-level overview below.

Demo Goal

In keeping with Microsoft’s emphasis on customer obsession, Azure engineering teams try to help answer user questions on online forums. We can only answer questions we know exist, and one of the ways we are alerted to new questions is by watching for user-applied tags. Users might not always know the best tag to apply to a given question, so it would be helpful to have an AI agent that automatically suggests good tags for new questions.

We aim to train an AI agent to automatically tag new Azure-related questions.
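Framed as a machine learning task, this is multi-label classification: the model emits one probability per candidate tag, and tags whose score clears a threshold are suggested. The sketch below illustrates only the suggestion step; the tag names and scores are hypothetical and not from the workshop materials.

```python
# Hedged sketch: turning per-tag probabilities into tag suggestions.
# A fine-tuned BERT model would produce one score per candidate tag;
# the tag names and scores here are hypothetical, for illustration.
TAGS = ["azure-functions", "azure-storage", "azure-devops", "azure-ml"]

def suggest_tags(scores, threshold=0.5):
    """Return tags whose predicted probability meets the threshold,
    highest-scoring first."""
    ranked = sorted(zip(TAGS, scores), key=lambda pair: pair[1], reverse=True)
    return [tag for tag, score in ranked if score >= threshold]

print(suggest_tags([0.91, 0.12, 0.64, 0.08]))
# → ['azure-functions', 'azure-devops']
```

The threshold trades precision for recall: raising it suggests fewer, higher-confidence tags.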

Training

First, check out the training notebook. After preparing our data in Azure Databricks, we train a Keras model on an Azure GPU cluster using the Azure Machine Learning service TensorFlow Estimator class. Notice how easy it is to integrate Keras, TensorFlow, and Azure’s compute infrastructure. We can easily monitor the progress of training with the run object.
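The submit-and-monitor flow described above can be sketched as follows. This is not the notebook's exact code: it assumes `ws` is an Azure ML `Workspace` and `estimator` is a configured TensorFlow estimator, and the experiment name is illustrative.

```python
# Hedged sketch: submitting a training run and monitoring it.
# Assumes ws is an azureml.core.Workspace and estimator is a
# configured azureml.train.dnn.TensorFlow estimator.
from azureml.core import Experiment

experiment = Experiment(workspace=ws, name="bert-question-tagging")
run = experiment.submit(estimator)

# The run object streams logs while training proceeds on the cluster,
# and exposes logged metrics when the run completes.
run.wait_for_completion(show_output=True)
print(run.get_metrics())
```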

Inferencing

Next, open up the inferencing notebook. Azure makes it simple to deploy your trained TensorFlow 2.0 model as a REST endpoint that returns suggested tags for new questions.
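A deployment to Azure Container Instances might look like the sketch below. It is not the notebook's exact code: it assumes the trained model was registered under an illustrative name, that `env` is an Azure ML `Environment` with TensorFlow 2.0 installed, and that `score.py` defines the standard `init()`/`run()` scoring functions.

```python
# Hedged sketch: deploying a registered model as a REST endpoint on
# Azure Container Instances. Assumes ws is a Workspace, the model was
# registered as "bert-tagger" (illustrative name), env is an Azure ML
# Environment, and score.py defines init() and run().
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

model = Model(ws, name="bert-tagger")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
deploy_config = AciWebservice.deploy_configuration(cpu_cores=2, memory_gb=4)

service = Model.deploy(ws, "bert-tagger-svc", [model],
                       inference_config, deploy_config)
service.wait_for_deployment(show_output=True)

# POST new question text to this URI to get suggested tags back.
print(service.scoring_uri)
```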

Machine Learning Operations

Next, open up the Machine Learning Operations instructions. For production use, we can bring additional robustness to the pipeline with MLOps, a Microsoft offering that brings a DevOps mindset to machine learning. It enables multiple data scientists to work on the same model while ensuring that only models meeting certain criteria are put into production.

Next steps

TensorFlow 2.0 opens up exciting new horizons for practitioners of deep learning, both old and new. If you would like to get started, check out the following resources: