Users adopting Cognitive Toolkit 2.0 for the first time will see several major improvements and myriad smaller additions. Among the big changes to the platform is the ability to use Python and C++. The release also adds support for reinforcement learning, a method for training AI agents through trial and error. Enhancing the performance of the service was important for Microsoft: according to the company, version 2.0 is now faster than any rival, especially when handling big data sets across multiple machines. When it launched in beta, Microsoft also announced a collaboration with NVIDIA: “Microsoft Cognitive Toolkit represents tight collaboration between Microsoft and NVIDIA to bring advances to the deep learning community,” said Ian Buck, general manager, Accelerated Computing Group, NVIDIA. “Compared to the previous version, it delivers almost two times performance boost in scaling to eight Pascal GPUs in an NVIDIA DGX-.”
The performance increases operate at mass scale and allow artificial intelligence to be implemented in consumer products. “One key reason to use Microsoft Cognitive Toolkit is its ability to scale efficiently across multiple GPUs and multiple machines on massive data sets,” said Chris Basoglu, partner engineering manager at Microsoft.
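The scaling pattern Basoglu describes can be illustrated with a small, plain-NumPy sketch. This is not CNTK's internal implementation, just the general idea of data-parallel training: each simulated worker computes a gradient on its shard of the minibatch, and the averaged gradient updates a shared model, exactly the step a toolkit would distribute across GPUs or machines.

```python
import numpy as np

# Conceptual sketch of synchronous data-parallel SGD on a linear model.
# All names here are illustrative, not CNTK API calls.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(256, 2))
y = X @ true_w

w = np.zeros(2)
lr = 0.1
for step in range(200):
    shards = np.array_split(np.arange(256), 4)  # 4 simulated workers
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))  # per-worker gradient
    w -= lr * np.mean(grads, axis=0)  # average, as an all-reduce would

print(np.round(w, 3))  # converges toward true_w
```

Averaging the four shard gradients reproduces the full-batch gradient, which is why synchronous data parallelism leaves the learning dynamics unchanged while spreading the compute.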
Cognitive Toolkit 2.0 Changes
Microsoft says Cognitive Toolkit has been popular and is already used by global companies. The company has already built it into its own services and products. Version 2.0 is now available on GitHub under an open-source license. While the new release comes with many changes, here are the main improvements:
- The ability to extend Cognitive Toolkit functions, learners, trainers, and optimizers with your own algorithms in Python and C++.
- Enhanced built-in distributed readers for speech, image, and text deep learning tasks.
- The ability to use TensorBoard visualizations from Cognitive Toolkit.
- Pretrained models available for use.
- Performance improvements.
- Support for distributed scenarios in the Python API, with examples in the distributed-scenario sections of the ConvNet and ResNet samples.
- Support for Asynchronous Stochastic Gradient Descent (ASGD)/Hogwild! training parallelization using Microsoft's Parameter Server (Project Multiverso).
- Support for training on one-hot and sparse arrays via NumPy.
- Support for object recognition using the Fast R-CNN algorithm.
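As a small illustration of the one-hot NumPy support mentioned above, the sketch below builds one-hot label arrays with plain NumPy, the kind of array the release notes say can now be fed to training, and shows them used in a softmax cross-entropy of the sort a trainer computes. This is a hedged, NumPy-only sketch, not the CNTK API itself.

```python
import numpy as np

# One-hot labels from class indices via NumPy fancy indexing.
labels = np.array([0, 2, 1, 2])                  # class indices
num_classes = 3
one_hot = np.eye(num_classes, dtype=np.float32)[labels]

# Using the labels in a softmax cross-entropy, as a trainer would.
logits = np.zeros((4, num_classes))              # uniform (untrained) logits
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -(one_hot * np.log(probs)).sum(axis=1).mean()
print(one_hot.shape, round(loss, 4))  # uniform logits give loss = ln(3)
```

For large vocabularies, the same labels would typically be kept in a sparse representation rather than materialized as a dense `(N, num_classes)` matrix.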