Kirill Gavrylyuk joins Scott Hanselman to show how to run Jupyter Notebook and Apache Spark in Azure Cosmos DB. Now you can use the interactive experience of Jupyter Notebook and analytics powered by Apache Spark with your operational data. Run analytics and ML on your operational data in real time, without data movement and without splitting your data into transactional and analytical silos.

[00:02:18] Jupyter Notebook demo
[00:05:41] Jupyter Notebook + Apache Spark demo

- Azure Cosmos DB overview
- Azure Cosmos DB pricing
- Azure Cosmos DB docs
- Create a free account (Azure)
Azure Cosmos DB keeps getting better: notebook support is in preview, Apache Spark is on board, queries have great new optimizations, and the idiomatic SDKs are now on v3. Kirill Gavrylyuk is back once again with all the updates.

- Azure Cosmos DB overview
- Azure Cosmos DB pricing
- Azure Cosmos DB docs
- Create a free account (Azure)
Deborah Chen joins Scott Hanselman to share some best practices on how to debug and optimize Azure Cosmos DB for better performance. Watch as they go through the common issues newcomers to Azure Cosmos DB run into with respect to performance and how to solve them by tuning Request Unit (RU) cost and choosing a good partition key.

- Azure Cosmos DB - Globally distributed, multi-model database service for any scale
- Optimize provisioned throughput cost in Azure Cosmos DB
- Optimize query cost in Azure Cosmos DB
- Partitioning in Azure Cosmos DB
- Request Units in Azure Cosmos DB
- Create a free account (Azure)
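The partition-key guidance above can be sketched with a toy simulation: Cosmos DB hash-partitions items by their partition key value, so a low-cardinality key concentrates data (and RU consumption) on a few physical partitions, while a high-cardinality key spreads load evenly. Everything below (the partition count, the sample order documents, the `partition_for` helper) is illustrative, not the actual Cosmos DB routing logic.

```python
import hashlib
from collections import Counter

def partition_for(key_value: str, partition_count: int = 4) -> int:
    """Hash a partition key value onto one of N physical partitions
    (a simplified stand-in for Cosmos DB's internal hash partitioning)."""
    digest = hashlib.md5(key_value.encode()).hexdigest()
    return int(digest, 16) % partition_count

# 1,000 hypothetical orders: 'status' has only two distinct values,
# while 'order_id' is unique per document.
orders = [{"order_id": f"order-{i}", "status": "open" if i % 10 else "closed"}
          for i in range(1000)]

# Partitioning by 'status' lands everything on at most two partitions (hot
# partitions); partitioning by 'order_id' spreads documents evenly.
by_status = Counter(partition_for(o["status"]) for o in orders)
by_order_id = Counter(partition_for(o["order_id"]) for o in orders)

print("status key   ->", dict(by_status))
print("order_id key ->", dict(by_order_id))
```

The same reasoning applies to RU cost: writes and reads against a hot partition queue up behind that partition's share of provisioned throughput, which is why a well-distributed key is the first thing to check when tuning.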
Matias Quaranta joins Scott Hanselman to share some best practices for creating serverless geo-distributed applications with Azure Cosmos DB. With the native integration between Azure Cosmos DB and Azure Functions, you can create database triggers, input bindings, and output bindings directly from your Azure Cosmos DB account. Using Azure Functions and Azure Cosmos DB, you can create and deploy event-driven serverless apps with low-latency access to rich data for a global user base.

- Serverless database computing using Azure Cosmos DB and Azure Functions
- Serverless geo-replicated event-based architecture sample for Azure Friday (GitHub)
- Change feed in Azure Cosmos DB - overview
- Create a free account (Azure)
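The database trigger mentioned above is built on the Cosmos DB change feed: every insert or update appears in the feed, and a processor delivers new batches to your function from a continuation point. Here is a minimal in-memory sketch of that pattern, assuming a hypothetical feed, handler, and materialized view; none of this is the real Azure Functions binding API.

```python
# All names here are illustrative stand-ins, not Azure SDK objects.
change_feed = []          # stands in for a container's change feed
materialized_view = {}    # stands in for an output-binding target

def write_document(doc):
    """Simulate an upsert: every write appends an entry to the change feed."""
    change_feed.append(doc)

def process_changes(continuation: int, handler) -> int:
    """Deliver all changes after `continuation` to `handler`;
    return the new continuation token."""
    batch = change_feed[continuation:]
    if batch:
        handler(batch)
    return len(change_feed)

def handler(batch):
    """The 'function' fired by the trigger: project changes into a view."""
    for doc in batch:
        materialized_view[doc["id"]] = doc

write_document({"id": "1", "city": "Seattle"})
write_document({"id": "2", "city": "Madrid"})
token = process_changes(0, handler)

write_document({"id": "1", "city": "Redmond"})   # an update re-enters the feed
token = process_changes(token, handler)

print(materialized_view["1"]["city"])  # the view reflects the latest write
```

Because updates reappear in the feed, downstream handlers must be idempotent, which is why the projection above simply overwrites by `id`.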
Spark is the world's foremost distributed analytics platform, delivering in-memory analytics with a speed and ease of use unheard of in Hadoop. Azure Cosmos DB is the lightning-fast distributed database powering Fortune 500 companies such as Walmart, ExxonMobil, Toyota, and many others. You can combine them easily using the natively built azure-cosmosdb-spark connector, or use the new Spark API integration that lets Spark take full advantage of Cosmos DB to run real-time analytics directly on petabytes of operational data. In this session we'll go over some of the most common use cases of the azure-cosmosdb-spark connector and highlight how to avoid the most common pitfalls. We will also talk about the new Azure Cosmos DB Spark API and the native support it brings for Apache Spark engines executing directly on petabytes of operational data stored in your globally distributed Cosmos databases, and walk through the capabilities it brings to developers, data engineers, and data scientists so they can use Cosmos DB as a flexible, scalable, and performant planet-scale data platform for running both OLTP and HTAP workloads alike.
In this session, we will discuss how to build mission-critical multi-tenant systems on Azure Cosmos DB that scale out to handle a global user base. We’ll begin by covering key concepts surrounding performance isolation, security isolation, high availability, disaster recovery, and SLAs. And then we’ll dive deep by walking through the Cosmos DB design considerations and capabilities – including partitioning strategies and multi-master.
For many newcomers to Azure Cosmos DB, the learning process starts with data modeling and partitioning. How should I structure my data? When should I co-locate data in a single container? Should I de-normalize or normalize properties? What's the best partition key for my model? In this demo-filled session, we discuss the strategies and thought process one should adopt for modeling and partitioning data effectively in Azure Cosmos DB. Using a real-world example, we explore the key Cosmos DB concepts of request units (RU), partitioning, and data modeling, and how understanding them guides the path to a data model that yields the best performance and scalability.
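One of the co-location questions above can be illustrated concretely: storing different entity types (here, a customer and their orders) in one container under a shared partition key value lets a single partition-scoped query assemble the whole customer view. The container contents, `pk` property name, and `query_partition` helper below are all hypothetical, chosen only to show the shape of the pattern.

```python
# A container holding two document types that share a partition key ("pk").
container = [
    {"id": "c-1", "pk": "customer-1", "type": "customer", "name": "Ada"},
    {"id": "o-1", "pk": "customer-1", "type": "order", "total": 40},
    {"id": "o-2", "pk": "customer-1", "type": "order", "total": 25},
    {"id": "c-2", "pk": "customer-2", "type": "customer", "name": "Lin"},
]

def query_partition(pk):
    """A partition-scoped read: it only touches one logical partition,
    which is the cheap, low-RU access path."""
    return [d for d in container if d["pk"] == pk]

# One partition read returns the customer profile and all of their orders.
view = query_partition("customer-1")
orders_total = sum(d["total"] for d in view if d["type"] == "order")
print(orders_total)  # 65
```

The trade-off is the usual one: co-location (or embedding) optimizes reads that always want the data together, while normalized, cross-partition models suit data that is updated independently or queried separately.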
As developers push intellectual property to registries, how will you secure and protect that IP? And once those applications are deployed, how do you ensure they are running properly and in good shape? In this session we'll cover building container images, plus image scanning, signing, and promotion across environments. Then we'll look at the tools and knowledge you need to keep your containerized applications healthy and how to detect when something goes wrong.
Make More of the Sky - Air Traffic Management (ATM) services affect the quality and performance of every commercial flight in the world, currently transporting about a billion passengers a year. Growing traffic, limited system capacity, strict safety requirements, and high environmental awareness demand constant improvement of ATM services. Come and listen to Heiko Udluft, AirSense Technical Leader; Jesse Anderson, Big Data guru; and Vincent Chartier, Customer Success CTO at Microsoft France, as they tell how they use Azure to continuously develop, deploy, and release AirSense services, Airbus's new-generation ATM services. These services, powered by the Azure Data & Analytics platform, provide business insights based on real-time aircraft position data to Airbus internal and external customers in order to shape the future of ATM. Find out how Azure Event Hubs, Azure Cosmos DB, and Azure Databricks, amongst others, were combined to handle real-time streams, derive analytics, and provide decision support directly relevant and valuable to the air transportation industry. Heiko, Jesse, and Vincent will also cover the collaborative approach they leveraged to overcome the challenges encountered along the way.
This session covers how various Azure technologies, such as Azure Event Hubs, Azure Cosmos DB, and Node.js, can be used to process telematics data. We'll do a code demo showing how sample telematics data is ingested into Event Hubs, processed by multiple processes, and stored in Azure Cosmos DB.
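The pipeline described above can be sketched end to end with a toy simulation: telemetry events are routed to per-device partitions (as Event Hubs would when partitioned by device id), each partition's consumer processes its events in order, and per-device aggregate documents are upserted into a store standing in for a Cosmos DB container. The event shapes, device names, and aggregation are all hypothetical.

```python
from collections import defaultdict

# Sample telemetry, in arrival order.
events = [
    {"device": "truck-1", "speed": 55},
    {"device": "truck-2", "speed": 40},
    {"device": "truck-1", "speed": 65},
]

# "Event Hubs": route events to per-device partitions, preserving order
# within each partition.
partitions = defaultdict(list)
for e in events:
    partitions[e["device"]].append(e)

# "Processors + Cosmos DB": each consumer folds its partition into one
# aggregate document keyed by device id.
store = {}
for device, evts in partitions.items():
    store[device] = {
        "id": device,
        "count": len(evts),
        "max_speed": max(e["speed"] for e in evts),
    }

print(store["truck-1"])
```

Partitioning by device id matters here for the same reason it does in the real services: it preserves per-device ordering while letting independent consumers scale out across devices.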