
Considering Azure Functions for a serverless data streaming scenario

In the blog post “A fast, serverless, big data pipeline powered by a single Azure Function,” we discussed a fraud detection solution delivered to a banking customer. This solution required complete processing of a streaming pipeline for telemetry data in real time using a serverless architecture. This blog post describes the evaluation process and the decision to use Microsoft Azure Functions.

Scenario

A large bank wanted to build a solution to detect fraudulent transactions submitted through its mobile banking channel. The solution is built on a common big data pipeline pattern: high volumes of real-time data are ingested into a cloud service, a series of data transformation and extraction activities runs, a feature matrix is created, and advanced analytics are applied. For the bank, the pipeline had to be fast and scalable enough for the end-to-end evaluation of each transaction to finish in under two seconds.

Pipeline requirements include the following:

  • Scalable and responsive to extreme bursts of ingested event activity: up to 4 million events and more than 8 million transactions daily.
  • Events were ingested as complex JSON files, each containing from two to five individual bank transactions. Each JSON file had to be parsed and the individual transactions extracted, processed, and evaluated for fraud (see the parsing sketch after this list).
  • Events and transactions had to be processed in order, with assurance that duplicates would not be processed. This requirement existed because behavioral data was extracted from each transaction to create customer and account profiles. If events were not processed sequentially, the calculations and aggregations used to create the profiles and the feature set for fraud prediction would be invalid, reducing the accuracy of the machine learning model.
  • The ability to do dynamic look-ups against reference data was a critical component of pipeline processing. For this scenario, reference data could be updated at any point during the day.
  • An architecture that could be deployed with ARM templates, making integration with CI/CD and DevOps processes easier. A template architecture meant the entire fraud detection pipeline architecture could be easily redeployed to facilitate testing or to quickly extend the bank’s fraud detection capabilities to additional banking channels such as Internet banking.
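
To make the parsing requirement concrete, here is a minimal C# sketch of how an ingested JSON event containing several bank transactions might be deserialized and split into individual transactions. The BankEvent and BankTransaction types, their properties, and the use of Newtonsoft.Json are illustrative assumptions; the actual schema used in the solution is not shown in this post.

    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Hypothetical shape of an ingested event; the real schema is not published.
    public class BankTransaction
    {
        public string TransactionId { get; set; }
        public string AccountId { get; set; }
        public decimal Amount { get; set; }
    }

    public class BankEvent
    {
        public string EventId { get; set; }
        public List<BankTransaction> Transactions { get; set; } // two to five per event
    }

    public static class EventParser
    {
        // Deserialize one raw JSON event and return its individual transactions
        // so each can be processed and evaluated for fraud.
        public static IReadOnlyList<BankTransaction> ExtractTransactions(string json)
        {
            var bankEvent = JsonConvert.DeserializeObject<BankEvent>(json);
            return bankEvent?.Transactions ?? new List<BankTransaction>();
        }
    }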

To meet these requirements, we evaluated Azure Functions because of its suitability for real-time big data streaming and the following capabilities:

  • Easy configuration and setup
  • Designed to handle real-time, large-scale event processing
  • Out-of-the-box integration with Event Hubs, Azure SQL Database (SQL Database), Azure Machine Learning, and other managed services

How did we do it?

Exploring the technology helped determine whether it was a fit for this specific situation. Two aspects of the solution required deep validation: how long does it take to process a single message end-to-end, and how many concurrent messages can be processed end-to-end?

This workflow begins with data streaming into a single instance of Event Hubs, which is then consumed by a single Azure Function as shown below.

Figure: Azure Function and Event Hubs data streaming workflow
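
Under stated assumptions, a minimal Event Hubs-triggered Azure Function for this kind of workflow could look like the C# sketch below. The function name, the event hub name ("transactions"), and the connection setting name ("EventHubConnection") are placeholders, and the actual parsing, feature extraction, and scoring logic is only indicated with a comment; this is not the bank's actual function.

    using System.Text;
    using Microsoft.Azure.EventHubs;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class FraudPipelineFunction
    {
        // Minimal Event Hubs-triggered function: batches of events are handed to the
        // function, where JSON parsing, feature extraction, and fraud scoring would run.
        [FunctionName("FraudPipeline")]
        public static void Run(
            [EventHubTrigger("transactions", Connection = "EventHubConnection")] EventData[] events,
            ILogger log)
        {
            foreach (var eventData in events)
            {
                string json = Encoding.UTF8.GetString(
                    eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
                log.LogInformation("Received event payload of {Length} bytes", json.Length);
                // Parse the JSON, extract transactions, look up reference data, and score for fraud here.
            }
        }
    }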

The test harness was driven by a message re-player that sent events to the event hub. We used TelcoGenerator, a call-event generation app downloadable from Microsoft, modified to use data specific to this scenario. The source code is available on GitHub.
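
For readers building a similar harness, the following is a minimal sketch of replaying sample JSON events into an event hub with the Microsoft.Azure.EventHubs SDK. It stands in for the TelcoGenerator-based re-player rather than reproducing it, and the connection string and payloads are assumptions.

    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs;

    public static class TestEventSender
    {
        // Replays a batch of sample JSON events into the event hub, similar in spirit
        // to the message re-player used for load testing.
        public static async Task SendSampleEventsAsync(string connectionString, string[] jsonEvents)
        {
            // The connection string is assumed to include the EntityPath of the target event hub.
            var client = EventHubClient.CreateFromConnectionString(connectionString);
            try
            {
                foreach (var json in jsonEvents)
                {
                    await client.SendAsync(new EventData(Encoding.UTF8.GetBytes(json)));
                }
            }
            finally
            {
                await client.CloseAsync();
            }
        }
    }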

What did we learn?

Azure Functions is easy to configure and within minutes can be set up to consume massive volumes of telemetry data from Azure Event Hubs. Load testing was performed and telemetry was captured through Azure Application Insights. Key metrics clearly showed that Azure Functions delivered the required performance and throughput for this particular workflow:

Load testing results for the 1 Event Hub + 1 Azure Function architecture:

  • Minimum time to process a single message end-to-end (lower is better): 69 milliseconds
  • Average number of events processed per minute (higher is better): 8,300
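
As one way to capture metrics like these, the sketch below times the processing of a single message and records the duration as a custom Application Insights metric with the Microsoft.ApplicationInsights SDK. The metric name and the wrapper shape are illustrative; the original load tests relied on the telemetry Azure Functions emits to Application Insights rather than necessarily on this exact pattern.

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.Extensibility;

    public static class LatencyTelemetry
    {
        private static readonly TelemetryClient Telemetry =
            new TelemetryClient(TelemetryConfiguration.CreateDefault());

        // Wraps the per-message processing step and records its duration as a custom
        // metric that can be aggregated and charted in Application Insights.
        public static void ProcessWithTiming(string json, Action<string> processMessage)
        {
            var stopwatch = Stopwatch.StartNew();
            processMessage(json);
            stopwatch.Stop();
            Telemetry.GetMetric("EndToEndProcessingMs").TrackValue(stopwatch.Elapsed.TotalMilliseconds);
        }
    }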

In addition to load testing performance, other features that helped drive the selection of Azure Functions included the following:

  • The ability to evaluate and execute one event at a time with millisecond latency and assurance that events are processed in order.
  • Azure Functions supports C#, JavaScript, Python, and other languages, which allows for the application of complex business logic within pipeline processing.
  • An Azure Function integrated with SQL Database provides multiple benefits:
      • Transactional control over event and transaction processing. If processing errors were found with a transaction in an event, all the transactions contained in that event could be rolled back (see the sketch after this list).
      • Doing the bulk of data preparation using SQL Database in-memory processing and native stored procedures was very fast.
  • Dynamic changes to reference data or business logic are easily accommodated with Azure Functions. Reference tables and stored procedures could be updated easily and quickly in SQL Database and used immediately in subsequent executions without requiring the pipeline’s redeployment.
  • JSON file processing was intensive and complex with up to five individual bank transactions extracted from each JSON file. With Azure Functions, JSON parsing was fast because it could leverage native JSON capabilities in .NET Framework.
  • Initially the service had an issue with locking on the Event Hubs consumer groups when scaling. After experimenting with configuration parameters described in “A fast, serverless, big data pipeline powered by a single Azure Function”, a single function was all that was needed to meet the big data volume requirements for this solution.
  • With an Azure Function, state can be saved easily between processing activities.
  • For a DevOps release process, it is straightforward to incorporate an Azure Functions pipeline and to develop unit tests for Azure Functions methods. For this scenario, it was helpful to develop unit tests for data value and data type checks in the Azure Function.
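
As a rough illustration of the transactional control described above, the following sketch writes all of an event's transactions inside a single SQL transaction and rolls everything back if any call fails. It reuses the hypothetical BankEvent and BankTransaction types from the earlier parsing sketch, and the stored procedure name and parameters are placeholders, not the bank's actual schema.

    using System.Data;
    using System.Data.SqlClient;

    public static class EventWriter
    {
        // Writes every transaction from one event inside a single SQL transaction; if any
        // stored-procedure call fails, the whole event is rolled back.
        public static void SaveEvent(string connectionString, BankEvent bankEvent)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var sqlTransaction = connection.BeginTransaction())
                {
                    try
                    {
                        foreach (var transaction in bankEvent.Transactions)
                        {
                            // Hypothetical stored procedure; the real data preparation ran as
                            // native stored procedures in SQL Database.
                            using (var command = new SqlCommand("dbo.usp_ProcessTransaction", connection, sqlTransaction))
                            {
                                command.CommandType = CommandType.StoredProcedure;
                                command.Parameters.AddWithValue("@TransactionId", transaction.TransactionId);
                                command.Parameters.AddWithValue("@AccountId", transaction.AccountId);
                                command.Parameters.AddWithValue("@Amount", transaction.Amount);
                                command.ExecuteNonQuery();
                            }
                        }
                        sqlTransaction.Commit();
                    }
                    catch
                    {
                        sqlTransaction.Rollback();
                        throw;
                    }
                }
            }
        }
    }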

Recommended next steps

As solution requirements are refined, it can become important for technology decision makers to know how a data pipeline will perform under sudden fluctuations in event and data volumes. As you evaluate technologies for your data pipeline solution, consider load testing the pipeline. To get you started, here are some links that can help:

A special thank you to Cedric Labuschagne, Chris Cook, and Eujon Sellers for their collaboration on this blog.