Seattle Fire Department 911 dispatches.
Volume and Retention
This dataset is stored in Parquet format. It is updated daily and contains about 800K rows (20 MB) in total as of 2019.
This dataset contains historical records accumulated from 2010 to the present. You can use parameter settings in the SDK to fetch data within a specific time range.
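For example, a minimal sketch of a time-bounded fetch; the full samples under Access below show the same `start_date`/`end_date` parameters in context:
from azureml.opendatasets import SeattleSafety
from dateutil import parser
# Fetch only the dispatches between May and December 2015.
safety = SeattleSafety(start_date=parser.parse('2015-05-01'),
                       end_date=parser.parse('2015-12-31'))
safety = safety.to_pandas_dataframe()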
Storage Location
This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.
Additional Information
This dataset is sourced from the City of Seattle government. The source link can be found here. See Licensing and Attribution for the terms of using this dataset. Email open.data@seattle.gov if you have any questions about the data source.
Notices
MICROSOFT PROVIDES AZURE OPEN DATASETS ON AN “AS IS” BASIS. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, GUARANTEES OR CONDITIONS WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, INCIDENTAL OR PUNITIVE, RESULTING FROM YOUR USE OF THE DATASETS.
This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.
Access
Available in | When to use |
---|---|
Azure Notebooks | Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine. |
Azure Databricks | Use this when you need the scale of an Azure managed Spark cluster to process the dataset. |
Azure Synapse | Use this when you need the scale of an Azure managed Spark cluster to process the dataset. |
Preview
dataType | dataSubtype | dateTime | category | subcategory | status | address | latitude | longitude | source | extendedProperties |
---|---|---|---|---|---|---|---|---|---|---|
Safety | 911_Fire | 1/21/2021 7:37:00 AM | Aid Response | null | null | 111 Cedar St | 47.615877 | -122.35052 | null | |
Safety | 911_Fire | 1/21/2021 7:37:00 AM | MVI - Motor Vehicle Incident | null | null | Rainier Ave S / S Massachusetts St | 47.588411 | -122.305827 | null | |
Safety | 911_Fire | 1/21/2021 7:22:00 AM | Illegal Burn | null | null | 888 Western Av | 47.603448 | -122.336632 | null | |
Safety | 911_Fire | 1/21/2021 6:59:00 AM | Aid Response | null | null | 3256 Portage Bay Pl E | 47.651445 | -122.320417 | null | |
Safety | 911_Fire | 1/21/2021 6:58:00 AM | Aid Response | null | null | 4547 19th Av Ne | 47.661675 | -122.307239 | null | |
Safety | 911_Fire | 1/21/2021 6:49:00 AM | Triaged Incident | null | null | Lake City Way Ne / Ne Northgate Way | 47.71042 | -122.300472 | null | |
Safety | 911_Fire | 1/21/2021 6:44:00 AM | Aid Response | null | null | 11030 5th Av Ne | 47.709488 | -122.323301 | null | |
Safety | 911_Fire | 1/21/2021 6:11:00 AM | Aid Response | null | null | 607 3rd Av | 47.602813 | -122.331449 | null | |
Safety | 911_Fire | 1/21/2021 5:58:00 AM | Aid Response | null | null | 2309 20th Ave S | 47.58287 | -122.306868 | null | |
Safety | 911_Fire | 1/21/2021 5:48:00 AM | Triaged Incident | null | null | 3641 2nd Av S | 47.571212 | -122.332002 | null |
Name | Data type | Unique | Values (sample) | Description |
---|---|---|---|---|
address | string | 191,447 | 517 3rd Av; 318 2nd Av Et S | Location of incident. |
category | string | 232 | Aid Response; Medic Response | Response type. |
dataSubtype | string | 1 | 911_Fire | “911_Fire” |
dataType | string | 1 | Safety | “Safety” |
dateTime | timestamp | 1,508,279 | 2020-11-04 06:49:00; 2019-06-19 11:51:00 | The date and time of the call. |
latitude | double | 93,730 | 47.602172; 47.600194 | This is the latitude value. Lines of latitude are parallel to the equator. |
longitude | double | 79,096 | -122.330863; -122.330541 | This is the longitude value. Lines of longitude run perpendicular to lines of latitude, and all pass through both poles. |
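As a quick sanity check on the coordinate columns, you can compute the bounding box of the dispatches. A minimal sketch, assuming a pandas DataFrame `safety` loaded as in the Azure Notebooks sample below:
# Bounding box of dispatch coordinates; values should fall roughly
# within Seattle city limits (lat ~47.5 to 47.74, lon ~-122.44 to -122.24).
print(safety['latitude'].min(), safety['latitude'].max())
print(safety['longitude'].min(), safety['longitude'].max())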
Azure Notebooks
# This is a package in preview.
from azureml.opendatasets import SeattleSafety
from dateutil import parser
# Bound the download to May 2015 through January 2016.
end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SeattleSafety(start_date=start_date, end_date=end_date)
# Load the selected window into a pandas DataFrame and summarize it.
safety = safety.to_pandas_dataframe()
safety.info()
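To get a first feel for the data, you can count dispatches per response type and per day. A minimal sketch continuing from the DataFrame above; `category` and `dateTime` are the schema columns documented earlier, and `dateTime` is assumed to be parsed as a datetime, as the schema indicates:
# Most frequent response types in the selected window.
print(safety['category'].value_counts().head(10))
# Daily dispatch volume, using the parsed dateTime column.
print(safety.set_index('dateTime').resample('D').size().head())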
# Pip install packages
import os, sys
!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "citydatacontainer"
folder_name = "Safety/Release/city=Seattle"
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")
print('Looking for the first parquet under the folder ' +
folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
container_url, azure_storage_sas_token if azure_storage_sas_token else None)
container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(name_starts_with=folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break
print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().readinto(local_file)
# Read the parquet file into Pandas data frame
import pandas as pd
print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
# You can apply additional filters to the DataFrame below.
print('Loaded as a Pandas data frame: ')
df
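For example, to keep only aid responses from a single week. A minimal sketch; the column names and the 'Aid Response' value come from the schema and preview above, and the dates are illustrative:
# Filter by category and by a one-week dateTime window.
mask = (df['category'] == 'Aid Response') & \
       (df['dateTime'] >= '2021-01-01') & (df['dateTime'] < '2021-01-08')
print(df[mask].shape)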
Azure Databricks
# This is a package in preview.
# You need to pip install azureml-opendatasets on your Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import SeattleSafety
from dateutil import parser
# Bound the download to May 2015 through January 2016.
end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SeattleSafety(start_date=start_date, end_date=end_date)
# Load the selected window into a Spark DataFrame and display the first rows.
safety = safety.to_spark_dataframe()
display(safety.limit(5))
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "citydatacontainer"
blob_relative_path = "Safety/Release/city=Seattle"
blob_sas_token = r""
# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
blob_sas_token)
print('Remote blob path: ' + wasbs_path)
# Read the Parquet file with Spark; evaluation is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
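With the `source` view registered, you can aggregate with Spark SQL; for example, the ten most common response types. A minimal sketch using only names defined above:
# Count dispatches per category, most frequent first.
display(spark.sql("""
    SELECT category, COUNT(*) AS dispatches
    FROM source
    GROUP BY category
    ORDER BY dispatches DESC
    LIMIT 10
"""))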
Azure Synapse
# This is a package in preview.
from azureml.opendatasets import SeattleSafety
from dateutil import parser
# Bound the download to May 2015 through January 2016.
end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SeattleSafety(start_date=start_date, end_date=end_date)
# Load the selected window into a Spark DataFrame.
safety = safety.to_spark_dataframe()
# Display top 5 rows
display(safety.limit(5))
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "citydatacontainer"
blob_relative_path = "Safety/Release/city=Seattle"
blob_sas_token = r""
# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
blob_sas_token)
print('Remote blob path: ' + wasbs_path)
# Read the Parquet file with Spark; evaluation is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://azureopendatastorage.blob.core.windows.net/citydatacontainer/Safety/Release/city=Seattle/*.parquet',
    FORMAT = 'parquet'
) AS [r];

City Safety
From the Urban Innovation Initiative at Microsoft Research: a Databricks notebook for analytics with safety data (311 and 911 call data) from major U.S. cities. The analyses show frequency distributions and geographic clustering of safety issues within cities.
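A minimal sketch of the geographic-clustering idea, assuming scikit-learn is available and a pandas DataFrame `df` is loaded as in the samples above; the cluster count of 10 is an arbitrary illustration, not the notebook's actual method:
import pandas as pd
from sklearn.cluster import KMeans
# Cluster dispatch locations into 10 geographic hot spots.
coords = df[['latitude', 'longitude']].dropna()
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(coords)
# Size of each spatial cluster, largest first.
print(pd.Series(kmeans.labels_).value_counts())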