Microsoft Cognitive Toolkit version 2.0
Updated: Wednesday, June 14, 2017
The Microsoft Cognitive Toolkit (CNTK) version 2.0 is now in full release and available to the public. Cognitive Toolkit enables enterprise-ready, production-grade AI by allowing users to create, train, and evaluate their own neural networks, which can then scale efficiently across multiple GPUs and multiple machines on massive data sets. Version 2.0 of the toolkit entered preview in October 2016, moved to release candidate on April 3, and is now available for production workloads.
The open-source toolkit can be found on GitHub. Hundreds of new features, performance improvements, and fixes have been added since the preview was introduced. As part of this general availability release, we're excited to highlight three new features below:
Keras Support. The Keras API was designed to let users develop AI applications quickly and is optimized for the user experience. It offers consistent and simple APIs, minimizes the number of user actions required for common use cases, and provides clear, actionable feedback on user error. Keras users can now benefit from the performance of Cognitive Toolkit without any changes to their existing Keras recipes.
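Keras selects its computational backend from the `KERAS_BACKEND` environment variable or from its `~/.keras/keras.json` configuration file, so switching an existing recipe to Cognitive Toolkit is a one-line change. A minimal sketch (assuming both keras and cntk are installed; the script name is hypothetical):

```shell
# One-off run with the CNTK backend
KERAS_BACKEND=cntk python my_training_script.py

# Or make it the default by setting "backend" in ~/.keras/keras.json:
#   { "backend": "cntk" }
```

The rest of the training script stays unchanged; Keras routes all tensor operations through the selected backend.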
Java Bindings and Spark Support. Traditionally, Cognitive Toolkit models have been evaluated in Python, BrainScript, or C#. Now, users can evaluate Cognitive Toolkit models with a new Java API. This makes the toolkit ideal for users who want to integrate deep learning models into their Java-based applications, or to evaluate models at scale on platforms like Spark.
Model Compression. Evaluating a trained model on the lower-end CPUs found in mobile products can make real-time performance difficult to achieve. This is especially true when attempting to evaluate models trained for image learning on real-time video coming from a camera. With the Cognitive Toolkit full release, we're including extensions that allow quantized implementations of operations that are several times faster than their full-precision counterparts. You'll be able to evaluate Cognitive Toolkit models much faster on servers and low-power embedded devices with little loss of evaluation accuracy.
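The toolkit's optimized quantized kernels are not reproduced here, but the underlying idea — trading a small amount of numeric precision for much cheaper integer arithmetic — can be sketched in plain Python. This is an illustrative symmetric linear quantization of a weight vector to 8-bit integers, not the toolkit's actual implementation:

```python
def quantize(weights, num_bits=8):
    """Map floats to signed integers via symmetric linear quantization."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / qmax
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the integer representation."""
    return [q * scale for q in quantized]

# Example weights: the rounding error per weight is bounded by scale / 2.
weights = [0.12, -0.5, 0.33, 0.97, -0.08]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

All stored values fit in a single byte, and matrix products over them can use fast integer instructions, which is why the quantized path is several times faster with little accuracy loss.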