Imagine you’re driving down the road. As long as the road is straight, you can see everything in front of you. But what happens when the road curves? Your brain makes assumptions based on past experiences to fill in what can’t be seen. For autonomous vehicles, the challenge is to give them the ability to make the same assumptions.
Our customer Elektronische Fahrwerksysteme GmbH (EFS) is working on that problem for a major auto manufacturer. We recently published a case study that describes how EFS uses NVIDIA GPUs in Microsoft Azure to analyze 2D images.
EFS had never applied deep learning to this kind of image processing before. Using Azure, they were able to quickly create a proof-of-concept environment, verify their algorithms, and demonstrate value without making large upfront investments of time and capital.
“The innovative ideas we’ve implemented so far give us trust in a new deep learning architecture and in solutions that will rely on it,” EFS software developer Max Jesch said. “We proved that it’s possible to use deep learning to analyze roads. That is a really big deal. As far as we know, EFS is the first company to do it on such a large scale.”
How large a scale? EFS had several terabytes of data in Azure Blob storage. The NC-series virtual machines, powered by the NVIDIA Tesla P100 GPU, made quick work of that data. And the elasticity of Azure gave EFS the flexibility they needed to get their work done effectively.