
San Francisco Safety Data

Tags: San Francisco, 911, Fire, 311, Public Safety

Fire department calls for service and 311 cases in San Francisco.

Fire Calls-For-Service includes all fire unit responses to calls. Each record includes the call number, incident number, address, unit identifier, call type, and disposition. All relevant time intervals are also included. Because this dataset is based on responses, and because most calls involve multiple units, there are multiple records for each call number. Addresses are associated with a block number, intersection, or call box, not a specific address.
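Because unit-level responses show up as repeated rows in this dataset (the call number itself is not among the columns documented below), duplicates can be collapsed client-side. A minimal pandas sketch, assuming a DataFrame named safety_df loaded as in the examples further down:

# Collapse exact duplicate rows, which correspond to multiple units responding to the same call.
unique_rows = safety_df.drop_duplicates()
print(len(safety_df) - len(unique_rows), "duplicate unit-level rows removed")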

311 Cases includes cases generally associated with a place or thing (for example parks, streets, or buildings) and created July 1, 2008 or later. Cases generally logged by a user regarding their own needs (for example, property or business tax questions, parking permit requests) are not included. See the Program Link for more information.

Volume and Retention

This dataset is stored in Parquet format. It is updated daily and contains about 6M rows (400MB) in total as of 2019.

This dataset contains historical records accumulated from 2015 to the present. You can use parameter settings in our SDK to fetch data within a specific time range.
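For instance, the azureml-opendatasets SDK shown in the notebook examples below takes start_date and end_date parameters to limit the range that is read:

from dateutil import parser
from azureml.opendatasets import SanFranciscoSafety

# Fetch only records between May 2015 and January 2016 (dates are illustrative).
start_date = parser.parse('2015-05-01')
end_date = parser.parse('2016-01-01')
safety = SanFranciscoSafety(start_date=start_date, end_date=end_date)
safety_df = safety.to_pandas_dataframe()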

Storage Location

This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.

Additional Information

This dataset is sourced from the City of San Francisco government. More details can be found at the following links: Fire Department Calls, 311 Cases.

See here for the terms of use for this dataset.

Notices

MICROSOFT PROVIDES AZURE OPEN DATASETS ON AN “AS IS” BASIS. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, GUARANTEES OR CONDITIONS WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, INCIDENTAL OR PUNITIVE, RESULTING FROM YOUR USE OF THE DATASETS.

This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.

Access

Available in / When to use:

Azure Notebooks: Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine.

Azure Databricks: Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Azure Synapse: Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Preview

dataType dataSubtype dateTime category subcategory status address latitude longitude source extendedProperties
Safety 911_Fire 6/17/2021 2:44:16 AM Potentially Life-Threatening Medical Incident null 15TH ST/JULIAN AV 37.7665992188642 -122.421056630869 null
Safety 911_Fire 6/17/2021 2:43:28 AM Potentially Life-Threatening Medical Incident null 2400 Block of 20TH AVE 37.7420672879133 -122.476766376613 null
Safety 911_Fire 6/17/2021 2:43:28 AM Potentially Life-Threatening Medical Incident null 2400 Block of 20TH AVE 37.7420672879133 -122.476766376613 null
Safety 911_Fire 6/17/2021 2:43:28 AM Potentially Life-Threatening Medical Incident null 2400 Block of 20TH AVE 37.7420672879133 -122.476766376613 null
Safety 911_Fire 6/17/2021 2:41:33 AM Non Life-threatening Medical Incident null 800 Block of INNES AVE 37.731421657054 -122.374864504136 null
Safety 911_Fire 6/17/2021 2:24:11 AM Non Life-threatening Medical Incident null 700 Block of MISSOURI ST 37.7572418129803 -122.395894468026 null
Safety 911_Fire 6/17/2021 2:24:04 AM Non Life-threatening Medical Incident null LA PLAYA/JUDAH ST 37.7602823232259 -122.509141220867 null
Safety 911_Fire 6/17/2021 2:20:41 AM Potentially Life-Threatening Medical Incident null 400 Block of MINNA ST 37.7819492675836 -122.406272435641 null
Safety 911_Fire 6/17/2021 2:20:41 AM Potentially Life-Threatening Medical Incident null 400 Block of MINNA ST 37.7819492675836 -122.406272435641 null
Safety 911_Fire 6/17/2021 2:20:41 AM Potentially Life-Threatening Medical Incident null 400 Block of MINNA ST 37.7819492675836 -122.406272435641 null
Columns (name, data type, number of unique values, sample values, description):

address (string; 283,413 unique values; samples: "Not associated with a specific address", "0 Block of 6TH ST")
Address of incident (note: address and location are generalized to the mid-block of the street, an intersection, or the nearest call box location, to protect caller privacy).

category (string; 108 unique values; samples: "Street and Sidewalk Cleaning", "Potentially Life-Threatening")
The human-readable name of the 311 service request type or the call type group for 911 fire calls.

dataSubtype (string; 2 unique values; samples: "911_Fire", "311_All")
"911_Fire" or "311_All".

dataType (string; 1 unique value; sample: "Safety")
"Safety"

dateTime (timestamp; 6,550,564 unique values; samples: 2020-07-28 06:40:26, 2016-06-18 14:19:13)
The date and time when the service request was made or when the fire call was received.

latitude (double; 1,647,832 unique values; samples: 37.777624238929, 37.786117211838)
Latitude of the location, in the WGS84 coordinate system.

longitude (double; 1,585,922 unique values; samples: -122.39998111124, -122.419854245692)
Longitude of the location, in the WGS84 coordinate system.

source (string; 9 unique values; samples: "Phone", "Mobile/Open311")
Mechanism or path by which the service request was received; typically "Phone", "Text/SMS", "Website", "Mobile App", "Twitter", etc., but terms may vary by system.

status (string; 3 unique values; samples: "Closed", "Open")
A single-word indicator of the current state of the service request. (Note: GeoReport V2 only permits "open" and "closed".)

subcategory (string; 1,271 unique values; samples: "Medical Incident", "Bulky Items")
The human-readable name of the service request subtype for 311 cases or the call type for 911 fire calls.
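Since dataSubtype distinguishes fire calls from 311 cases, the two streams can be separated with a simple filter. A minimal pandas sketch, assuming a DataFrame named safety_df loaded as in the examples below:

# Split the combined dataset by its two sources using the dataSubtype column.
fire_calls = safety_df[safety_df['dataSubtype'] == '911_Fire']
cases_311 = safety_df[safety_df['dataSubtype'] == '311_All']

# status and source show up as null for the 911_Fire rows in the preview above,
# so inspect them on the 311 side.
print(cases_311['status'].value_counts())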


Azure Notebooks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import SanFranciscoSafety

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SanFranciscoSafety(start_date=start_date, end_date=end_date)
safety = safety.to_pandas_dataframe()
ActivityStarted, to_pandas_dataframe Looking for parquet files... Reading them into Pandas dataframe... Reading Safety/Release/city=SanFrancisco/part-00125-tid-8598556649077331715-e7875271-3301-48fe-88c1-a6ce35841072-136781.c000.snappy.parquet under container citydatacontainer Done. ActivityCompleted: Activity=to_pandas_dataframe, HowEnded=Success, Duration=58673.14 [ms]
In [2]:
safety.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 361411 entries, 10 to 5821034 Data columns (total 11 columns): dataType 361411 non-null object dataSubtype 361411 non-null object dateTime 361411 non-null datetime64[ns] category 361409 non-null object subcategory 361411 non-null object status 231935 non-null object address 361411 non-null object latitude 361411 non-null float64 longitude 361411 non-null float64 source 231935 non-null object extendedProperties 117871 non-null object dtypes: datetime64[ns](1), float64(2), object(8) memory usage: 33.1+ MB
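As a quick illustrative follow-up (not part of the original notebook), the loaded pandas DataFrame can be aggregated directly, for example to count records per category or per day:

# Most common categories in the selected time range.
print(safety['category'].value_counts().head(10))

# Daily record counts, using the dateTime column as the index.
daily = safety.set_index('dateTime').resample('D').size()
print(daily.head())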
In [1]:
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
In [2]:
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "citydatacontainer"
folder_name = "Safety/Release/city=SanFrancisco"
In [3]:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().download_to_stream(local_file)
In [4]:
# Read the parquet file into Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
In [5]:
# You can add your filter below
print('Loaded as a Pandas data frame: ')
df
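As an example of such a filter (illustrative only; adjust to your needs), one could keep just the 911 fire calls that have coordinates:

# Keep only 911 fire calls that have a latitude value.
fire_df = df[(df['dataSubtype'] == '911_Fire') & df['latitude'].notna()]
fire_df.head()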

Azure Databricks

Language: Python
In [1]:
# This is a package in preview.
# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import SanFranciscoSafety

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SanFranciscoSafety(start_date=start_date, end_date=end_date)
safety = safety.to_spark_dataframe()
ActivityStarted, to_spark_dataframe ActivityStarted, to_spark_dataframe_in_worker ActivityCompleted: Activity=to_spark_dataframe_in_worker, HowEnded=Success, Duration=3754.51 [ms] ActivityCompleted: Activity=to_spark_dataframe, HowEnded=Success, Duration=3757.76 [ms]
In [2]:
display(safety.limit(5))
dataType dataSubtype dateTime category subcategory status address latitude longitude source extendedProperties
Safety 911_Fire 2015-11-07T19:49:04.000+0000 Potentially Life-Threatening Medical Incident null MISSION ST/23RD ST 37.753836588542 -122.418593946321 null null
Safety 911_Fire 2015-08-06T05:23:02.000+0000 Alarm Alarms null 200 Block of 10TH ST 37.773466489733 -122.413546904215 null null
Safety 911_Fire 2015-07-28T13:34:52.000+0000 Potentially Life-Threatening Medical Incident null HOWARD ST/MAIN ST 37.790612669554 -122.393407939021 null null
Safety 911_Fire 2015-06-24T10:39:57.000+0000 Non Life-threatening Medical Incident null 200 Block of BRIDGEVIEW DR 37.734209339882 -122.397590096788 null null
Safety 911_Fire 2015-06-22T15:58:28.000+0000 Alarm Alarms null 100 Block of POST ST 37.788796325286 -122.403991276137 null null
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "citydatacontainer"
blob_relative_path = "Safety/Release/city=SanFrancisco"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that the read is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
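Beyond a plain SELECT, the registered temporary view can also serve aggregations. A minimal sketch (not part of the original notebook) counting records per category and year:

# Count records per category and year using the temporary view registered above.
display(spark.sql("""
    SELECT category, year(dateTime) AS year, COUNT(*) AS cnt
    FROM source
    GROUP BY category, year(dateTime)
    ORDER BY cnt DESC
    LIMIT 20
"""))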

Azure Synapse

Language: Python
In [18]:
# This is a package in preview.
from azureml.opendatasets import SanFranciscoSafety

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2016-01-01')
start_date = parser.parse('2015-05-01')
safety = SanFranciscoSafety(start_date=start_date, end_date=end_date)
safety = safety.to_spark_dataframe()
In [19]:
# Display top 5 rows
display(safety.limit(5))
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "citydatacontainer"
blob_relative_path = "Safety/Release/city=SanFrancisco"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that the read is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
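The same DataFrame can also be queried through the DataFrame API. A small sketch (illustrative only) that keeps the 911 fire calls and counts the most frequent call types:

from pyspark.sql import functions as F

# Restrict to 911 fire calls and count the most frequent call types (subcategory).
fire_df = df.filter(F.col('dataSubtype') == '911_Fire')
display(fire_df.groupBy('subcategory').count().orderBy(F.desc('count')).limit(10))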

City Safety

From the Urban Innovation Initiative at Microsoft Research: a Databricks notebook for analytics with safety data (311 and 911 call data) from major U.S. cities. The analyses show frequency distributions and geographic clustering of safety issues within cities.
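As a rough illustration of what geographic clustering can look like on this dataset (a sketch only, not the notebook's actual method), incidents can be binned onto a coarse latitude/longitude grid with pandas, assuming a DataFrame named safety_df:

# Round coordinates to roughly 100 m cells and count incidents per cell.
grid = (
    safety_df.assign(lat_bin=safety_df['latitude'].round(3),
                     lon_bin=safety_df['longitude'].round(3))
             .groupby(['lat_bin', 'lon_bin'])
             .size()
             .sort_values(ascending=False)
)
print(grid.head(10))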