This event has ended
Train a distributed convolutional neural network using Microsoft Cognitive Toolkit (CNTK) and Batch AI
Learn how we use Microsoft’s Cognitive Toolkit, also known as CNTK, to train a convolutional neural network over multiple nodes and multiple GPUs.
Increasingly, computer vision applications are adopting deep learning, and many are achieving great success. Nevertheless, training deep networks on large data sets remains challenging: the computation required is enormous, and training a convolutional neural network on the Sports-1M data set, for example, can take months. Add the art of hyper-parameter tuning, and the community clearly needs tools for training deep learning networks across multiple servers with multiple GPUs. In this session, we will show how we use Microsoft’s Cognitive Toolkit, also known as CNTK, to train a convolutional neural network over multiple nodes and multiple GPUs. CNTK has unique advantages in speed and scalability. In the tutorial, we’ll show that CNTK achieves almost linear scalability through advanced algorithms such as 1-bit SGD and block-momentum SGD, both of which will be explained in detail.
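To give a feel for why 1-bit SGD cuts communication cost so dramatically, here is a minimal NumPy sketch of the core idea: each worker sends only the sign pattern of its gradient (plus two reconstruction scalars) and carries the quantization error forward into the next minibatch via an error-feedback residual. This is an illustrative simplification, not CNTK's actual implementation; the function name and the mean-based reconstruction are assumptions for the sketch.

```python
import numpy as np

def one_bit_sgd_quantize(grad, residual):
    """Sketch of 1-bit gradient quantization with error feedback.

    grad:     the local gradient for this minibatch
    residual: quantization error carried over from the previous step

    Returns the quantized gradient (what would be communicated) and the
    new residual to carry into the next step.
    """
    g = grad + residual                    # fold in the carried-over error
    mask = g >= 0                          # 1 bit per element: the sign
    # Reconstruct each sign bucket with its mean value, which minimizes
    # squared quantization error for a fixed sign pattern.
    pos = g[mask].mean() if mask.any() else 0.0
    neg = g[~mask].mean() if (~mask).any() else 0.0
    quantized = np.where(mask, pos, neg)   # only signs + two scalars travel
    new_residual = g - quantized           # error feedback for next minibatch
    return quantized, new_residual
```

Because the residual is added back in before the next quantization, no gradient information is permanently lost, only delayed, which is why the method can approach the accuracy of full-precision SGD while exchanging roughly 1/32 of the data.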
Time: Tue, 24 Oct 2017 17:00:00 GMT (UTC)