Microsoft and Intel donate Azure Hardware, AI Services to Advance Intelligent Edge Research at Carnegie Mellon University

Edge computing is one of the major trends that is not only transforming the cloud market but also creating new opportunities for business and society. At its core, edge computing is about harnessing compute on the device closest to where insights need to be realized, whether a connected car, a piece of machinery, or a remote oil field, so those insights arrive without delay and with a high degree of accuracy. This is known as the “intelligent edge” because these devices are constantly learning, often aided by AI and machine learning algorithms powered by the intelligent cloud. At Microsoft we see amazing new applications of the intelligent edge and intelligent cloud every day, and yet the opportunity is so expansive that the surface has barely been scratched.

To further advance inquiry and discovery at the edge, today we are announcing that Microsoft is donating cloud hardware and services to Carnegie Mellon University’s Living Edge Laboratory. Carnegie Mellon University is recognized as one of the world’s leading research institutions. Earlier this year, the university announced a $27.5 million semiconductor research initiative to connect edge devices to the cloud. Today’s announcement builds on these existing commitments to discovery in the field of edge computing.

The Living Edge Laboratory is a testbed for exploring applications that generate large data volumes and require intense processing with near-instantaneous response times. The lab is designed to open access to the latest innovations in edge computing and advance discovery for edge computing applications across industries. Microsoft will donate an Azure Data Box Edge, an Azure Stack system in partnership with Intel, and Azure credits to support advanced machine learning and AI at the edge.

This new paradigm of cloud computing requires consistency in how an application is developed so that it can run both in the cloud and at the edge. We are building Azure to make this possible. From app platform to AI, security, and management, our customers can architect, develop, and run a distributed application as a single, consistent environment.

We offer the most comprehensive portfolio to enable computing at the edge, covering the full spectrum from IoT devices and sensors to bringing the full power of the cloud to the edge with Azure Stack. With this spectrum spanning hardware and software, along with security, advanced analytics, and AI services, Microsoft provides a robust platform for developers to research and create new applications for the edge.

The Living Edge Lab was established over the past year through the Open Edge Computing Initiative, a collective effort dedicated to driving the business opportunities and technologies surrounding edge computing. With the addition of Microsoft products, faculty and students will be able to develop new applications and compare their performance with the components already in place in the lab. As part of this donation, Microsoft is also joining the Open Edge Computing Initiative.

Students at Carnegie Mellon are already making exciting discoveries and building applications powered by Azure AI and ML services at the edge. One of these applications is designed to help visually impaired people detect objects or people nearby. The video feed from a stereoscopic camera worn by the user is transmitted to a nearby cloudlet, where real-time video analytics is used to detect obstacles. This information is transmitted back to the user and communicated via vibro-tactile feedback.
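To make that pipeline concrete, here is a minimal Python sketch of the offload loop. It is only an illustration under stated assumptions: the Obstacle type, detect_obstacles, and to_vibration are hypothetical stand-ins for the lab’s cloudlet-side analytics and wearable feedback driver, not the actual application code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    label: str          # e.g. "person" or "pole"
    bearing_deg: float  # direction relative to the wearer (negative = left)
    distance_m: float   # estimated from stereoscopic disparity

def detect_obstacles(left_frame: bytes, right_frame: bytes) -> List[Obstacle]:
    """Hypothetical stand-in for the cloudlet-side video analytics."""
    # A real implementation would run a trained detector plus stereo depth estimation here.
    return [Obstacle(label="person", bearing_deg=-15.0, distance_m=2.4)]

def to_vibration(obstacles: List[Obstacle]) -> dict:
    """Map detections to a simple vibro-tactile cue for the wearable device."""
    if not obstacles:
        return {"intensity": 0.0, "side": "none"}
    nearest = min(obstacles, key=lambda o: o.distance_m)
    intensity = max(0.0, min(1.0, 1.0 - nearest.distance_m / 5.0))  # closer obstacles vibrate harder
    side = "left" if nearest.bearing_deg < 0 else "right"
    return {"intensity": round(intensity, 2), "side": side}

# One round trip: stereo frames go up to the cloudlet, a haptic cue comes back.
cue = to_vibration(detect_obstacles(b"<left-frame>", b"<right-frame>"))
print(cue)  # {'intensity': 0.52, 'side': 'left'}
```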

Another application, OpenRTiST, allows a user to see the world around them in real time, “through the eyes of an artist.” The video feed from the camera of a mobile device is transmitted to a nearby cloudlet, transformed there by a deep neural network trained offline to learn the artistic features of a famous painting, and returned to the user’s device as a video feed. The entire round trip is fast enough to preserve the illusion that the world around the user, as displayed on the device, is being continuously repainted by the artist.
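As an illustration of that latency constraint, here is a minimal, hypothetical Python sketch of the client loop: stylize_on_cloudlet and camera_frames are illustrative placeholders rather than OpenRTiST’s actual interfaces, and the frame budget is only an assumed target for a smooth display.

```python
import time

FRAME_BUDGET_MS = 33.0  # assumed target of roughly 30 frames per second

def stylize_on_cloudlet(frame: bytes) -> bytes:
    """Placeholder for the cloudlet service: a DNN trained offline applies the painting's style."""
    # A real deployment would send the frame over the network and run the network on a GPU.
    return frame  # identity transform keeps this sketch self-contained

def camera_frames(count: int):
    """Placeholder for the mobile device's camera; yields raw frames."""
    for i in range(count):
        yield f"<frame {i}>".encode()

for frame in camera_frames(5):
    start = time.perf_counter()
    styled = stylize_on_cloudlet(frame)                  # offload to the nearby cloudlet
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # The "repainted world" illusion holds only if each round trip fits within the frame budget.
    print(f"{styled!r}: {elapsed_ms:.2f} ms (within budget: {elapsed_ms <= FRAME_BUDGET_MS})")
```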

This is the beginning of an exciting new chapter of research at Carnegie Mellon, stemming from a collaboration on edge computing that began 10 years ago, and we cannot wait to see what new discoveries and scenarios come to life from the Living Edge Lab.