
Azure Media Services Motion Detector adds detection zones, sensitivity levels and more

Two months ago we announced the public preview of Motion Detection on Azure Media Services. Now we are releasing an update that adds a number of new features to the service, including improved accuracy, sensitivity adjustment, multiple polygonal detection zones and event merging. These changes make it easier to decrease false positives by limiting motion detection to a specific entryway, or by changing the sensitivity level to match your specific use case.

This makes it even easier to supplement security cameras that already have a naïve motion detector built in with a second pass in the cloud, which filters the real motion out of a myriad of false alarms.

New features

Polygonal detection zones

In the previous version of motion detection, events were detected across the entire frame. This can be undesirable in, for example, a busy store where alerts are only needed for motion at the front entrance or the cash registers. The new polygonal zones narrow down the locations where motion is reported to only the user-specified polygons. Each zone is created with three or more points and should be a simple polygon.


Sensitivity levels

There are three sensitivity levels to choose from: low, medium and high, where higher sensitivity means that smaller movements are also reported. Adjusting the sensitivity is useful for controlling the number of false positives that get reported.

Merge thresholds

Event merge will combine multiple motion events that happen within a specified timespan of each other into a single event. For example, on a busy street this can be used to reduce the number of alerts that get fired by reporting closely spaced events as single longer events.
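Conceptually, the merging step folds any event that starts within the threshold of the previous event's end into that event. The sketch below illustrates the idea in Python; it is not the service's actual implementation, and the `merge_events` helper and its default threshold are assumptions for illustration.

```python
from datetime import timedelta

def merge_events(events, threshold=timedelta(seconds=3)):
    """Merge (start, end) events whose gaps are within `threshold`.

    Illustrative sketch of event merging, not the service's code.
    `events` must be sorted by start time.
    """
    merged = []
    for start, end in events:
        if merged and start - merged[-1][1] <= threshold:
            # Close enough to the previous event: extend it instead
            # of reporting a separate alert.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

With a 3-second threshold, two events at 0-2s and 3-4s collapse into a single 0-4s event, while an event at 20-21s stays separate.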


Motion location

In addition to reporting the region where motion was registered, the output file will also define the exact area where motion occurred using a rectangular bounding box. Check out the example below.

    "locations": [{
        "x": 0.004184,
        "y": 0.007463,
        "width": 0.391667,
        "height": 0.185185
    }]
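Because the coordinates are normalized to the frame dimensions, converting a reported location to pixels is a simple multiplication. A minimal sketch (the `to_pixels` helper is hypothetical; the 1280x720 frame size in the usage below is an assumption):

```python
def to_pixels(location, frame_width, frame_height):
    """Convert a normalized motion location (coordinates in 0..1)
    to integer pixel coordinates for a given frame size."""
    return {
        "x": round(location["x"] * frame_width),
        "y": round(location["y"] * frame_height),
        "width": round(location["width"] * frame_width),
        "height": round(location["height"] * frame_height),
    }
```

For the sample location above on a 1280x720 frame, this yields a box of roughly 501x133 pixels near the top-left corner.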

Input configuration file

The input configuration has been expanded to report the features mentioned above. The empty preset from the previous version will use the default values below.

JSON sample:

    {
      "Version": "1.0",
      "Options": {
        "sensitivityLevel": "medium",
        "frameSamplingValue": 1,
        "detectLightChange": "false",
        "detectionZones": [
          [
            {"x": 0, "y": 0},
            {"x": 0.5, "y": 0},
            {"x": 0, "y": 1}
          ],
          [
            {"x": 0.3, "y": 0.3},
            {"x": 0.55, "y": 0.3},
            {"x": 0.8, "y": 0.3},
            {"x": 0.8, "y": 0.55},
            {"x": 0.8, "y": 0.8},
            {"x": 0.55, "y": 0.8},
            {"x": 0.3, "y": 0.8},
            {"x": 0.3, "y": 0.55}
          ]
        ]
      }
    }
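Since every zone must be a simple polygon of three or more points with coordinates between 0 and 1, it can be handy to sanity-check a configuration client-side before submitting the job. The service does its own validation; this `validate_zones` helper is just an illustrative check, assuming the zone and point shapes shown above.

```python
def validate_zones(zones):
    """Sanity-check detection zones before submitting a preset.

    Illustrative client-side check: each zone must have at least
    3 points, and every coordinate must fall within [0, 1].
    """
    for i, zone in enumerate(zones):
        if len(zone) < 3:
            raise ValueError(f"zone {i} needs at least 3 points")
        for point in zone:
            if not (0 <= point["x"] <= 1 and 0 <= point["y"] <= 1):
                raise ValueError(f"zone {i} has an out-of-range point: {point}")
```

A triangle such as the first zone above passes; a two-point "zone" raises a ValueError.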







sensitivityLevel ('low', 'medium', 'high')
Sets the sensitivity level at which motion is reported. Adjust this to balance the number of false positives.

frameSamplingValue (positive integer)
Sets the frequency at which the algorithm runs: 1 means every frame, 2 means every second frame, and so on.

detectLightChange ('true', 'false')
Chooses whether light changes are reported in the results.

mergeTimeThreshold (xs:time, Hh:mm:ss; for example 00:00:03)
Specifies the time window within which motion events are combined into a single event.

detectionZones (a list of detection zones; a detection zone is 3 or more points, and a point is an x and y coordinate from 0 to 1)
A list of polygonal detection zones to be used. The output JSON file references these zones with ids starting from 0. The default is a single zone covering the entire frame.

Output JSON result

The output JSON format has changed to reflect many of the new features:

- regions: now lists the polygon regions instead of the old rectangles. The only type currently available is "polygon"; more types, such as circles, may be supported in the future.
- version: the new output version id is 2.
- locations: this new entry under events lists the exact location where the motion occurred, which is more specific than the detection zones.

    {
      "version": 2,
      "timescale": 23976,
      "offset": 0,
      "framerate": 24,
      "width": 1280,
      "height": 720,
      "regions": [
        {
          "id": 0,
          "type": "polygon",
          "points": [
            {"x": 0, "y": 0},
            {"x": 0.5, "y": 0},
            {"x": 0, "y": 1}
          ]
        }
      ],
      "fragments": [
        {
          "start": 0,
          "duration": 226765
        },
        {
          "start": 226765,
          "duration": 47952,
          "interval": 999,
          "events": [
            {
              "type": 2,
              "typeName": "motion",
              "locations": [
                {
                  "x": 0.004184,
                  "y": 0.007463,
                  "width": 0.991667,
                  "height": 0.985185
                }
              ],
              "regionId": 0
            }
          ]
        }
      ]
    }
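Note that the start and duration values in the output are expressed in media ticks; dividing by the top-level timescale (ticks per second) converts them to seconds. A small sketch of that conversion, assuming the structure shown above (the `fragment_times` helper is hypothetical):

```python
import json

def fragment_times(result_json):
    """Convert each fragment's start and duration from media ticks
    to seconds, using the top-level 'timescale' (ticks per second)."""
    data = json.loads(result_json)
    scale = data["timescale"]
    return [
        (frag["start"] / scale, frag.get("duration", 0) / scale)
        for frag in data["fragments"]
    ]
```

For the sample above (timescale 23976), the second fragment starts at roughly 9.46 seconds and lasts exactly 2 seconds.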

For more detailed information, check out the documentation page.


The Azure Media Services Explorer open source project also supports these new features.


Contact us

Keep up with the Azure blog's Media Services posts to hear more updates on the Motion Detection Media Processor and the Media Analytics initiative.

Send your feature requests to UserVoice. For questions about any of the Media Analytics products, send an email to amsanalytics@microsoft.com.