
NYC Taxi & Limousine Commission - green taxi trip records


The green taxi trip records include fields capturing pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts.

Volume and retention

This dataset is stored in Parquet format. There are about 80 million rows (2 GB) in total as of 2018.

This dataset contains historical records accumulated from 2009 to 2018. You can use parameter settings in our SDK to fetch data within a specific time range, as sketched below.
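
For example, assuming the azureml-opendatasets preview package is installed (the same package used in the Access examples below), a single month can be pulled by passing a date range; the SDK then reads only the monthly puYear/puMonth Parquet partitions that overlap the range (see the "Target paths" in the output further down):

# A minimal sketch: fetch only May 2018 by passing a date range to the SDK.
# Assumes the azureml-opendatasets preview package is installed.
from dateutil import parser
from azureml.opendatasets import NycTlcGreen

start_date = parser.parse('2018-05-01')
end_date = parser.parse('2018-05-31')

# Only the partitions overlapping [start_date, end_date] are read.
nyc_tlc = NycTlcGreen(start_date=start_date, end_date=end_date)
df = nyc_tlc.to_pandas_dataframe()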

Storage location

This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.

Additional information

NYC Taxi and Limousine Commission (TLC):

The data was collected and provided to the NYC Taxi and Limousine Commission (TLC) by technology providers authorized under the Taxicab & Livery Passenger Enhancement Programs (TPEP/LPEP). The trip data was not created by the TLC, and the TLC makes no representations as to the accuracy of these data.

For additional information about TLC trip record data, see here and here.

Notices

Microsoft provides Azure Open Datasets on an "as is" basis. Microsoft makes no warranties, express or implied, guarantees, or conditions with respect to your use of the datasets. To the extent permitted under your local law, Microsoft disclaims all liability for any damages or losses, including direct, consequential, special, indirect, incidental, or punitive, resulting from your use of the datasets.

This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.

Access

Available in / When to use
Azure Notebooks

Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine.

Azure Databricks

Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Azure Synapse

Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Preview

vendorID lpepPickupDatetime lpepDropoffDatetime passengerCount tripDistance puLocationId doLocationId rateCodeID storeAndFwdFlag paymentType fareAmount extra mtaTax improvementSurcharge tipAmount tollsAmount totalAmount tripType puYear puMonth
2 6/24/2081 5:40:37 PM 6/24/2081 6:42:47 PM 1 16.95 93 117 1 N 1 52 1 0.5 0.3 0 2.16 55.96 1 2081 6
2 11/28/2030 12:19:29 AM 11/28/2030 12:25:37 AM 1 1.08 42 247 1 N 2 6.5 0 0.5 0.3 0 0 7.3 1 2030 11
2 11/28/2030 12:14:50 AM 11/28/2030 12:14:54 AM 1 0.03 42 42 5 N 2 5 0 0 0 0 0 5 2 2030 11
2 11/14/2020 11:38:07 AM 11/14/2020 11:42:22 AM 1 0.63 129 129 1 N 2 4.5 1 0.5 0.3 0 0 6.3 1 2020 11
2 11/14/2020 9:55:36 AM 11/14/2020 10:04:54 AM 1 3.8 82 138 1 N 2 12.5 1 0.5 0.3 0 0 14.3 1 2020 11
2 8/26/2019 4:18:37 PM 8/26/2019 4:19:35 PM 1 0 264 264 1 N 2 1 0 0.5 0.3 0 0 1.8 1 2019 8
2 7/1/2019 8:28:33 AM 7/1/2019 8:32:33 AM 1 0.71 7 7 1 N 1 5 0 0.5 0.3 1.74 0 7.54 1 2019 7
2 7/1/2019 12:04:53 AM 7/1/2019 12:21:56 AM 1 2.71 223 145 1 N 2 13 0.5 0.5 0.3 0 0 14.3 1 2019 7
2 7/1/2019 12:04:11 AM 7/1/2019 12:21:15 AM 1 3.14 166 142 1 N 2 14.5 0.5 0.5 0.3 0 0 18.55 1 2019 7
2 7/1/2019 12:03:37 AM 7/1/2019 12:09:27 AM 1 0.78 74 74 1 N 1 6 0.5 0.5 0.3 1.46 0 8.76 1 2019 7
Name Data type Unique Values (sample) Description
doLocationId string 264 74
42

The TLC taxi zone in which the taximeter was disengaged.

dropoffLatitude double 109,721 40.7743034362793
40.77431869506836

Deprecated as of July 2016.

dropoffLongitude double 75,502 -73.95272827148438
-73.95274353027344

Deprecated as of July 2016.

extra double 202 0.5
1.0

Miscellaneous extras and surcharges. Currently, this includes only the $0.50 and $1 rush hour and overnight charges.

fareAmount double 10,367 6.0
5.5

The time-and-distance fare calculated by the meter.

improvementSurcharge string 92 0.3
0

The $0.30 improvement surcharge assessed on hailed trips at the flag drop. The improvement surcharge began being levied in 2015.

lpepDropoffDatetime timestamp 58,100,713 2016-05-22 00:00:00
2016-05-09 00:00:00

The date and time when the meter was disengaged.

lpepPickupDatetime timestamp 58,157,349 2013-10-22 12:40:36
2014-08-09 15:54:25

The date and time when the meter was engaged.

mtaTax double 34 0.5
-0.5

The $0.50 MTA tax that is automatically triggered based on the metered rate in use.

passengerCount int 10 1
2

The number of passengers in the vehicle.

This is a driver-entered value.

paymentType int 5 2
1

A numeric code signifying how the passenger paid for the trip.

1 = Credit card

2 = Cash

3 = No charge

4 = Dispute

5 = Unknown

6 = Voided trip

pickupLatitude double 95,110 40.721351623535156
40.721336364746094

Deprecated as of July 2016.

pickupLongitude double 55,722 -73.84429931640625
-73.84429168701172

Deprecated as of July 2016.

puLocationId string 264 74
41

The TLC taxi zone in which the taximeter was engaged.

puMonth int 12 3
5
puYear int 14 2015
2016
rateCodeID int 7 1
5

The final rate code in effect at the end of the trip.

1 = Standard rate

2 = JFK

3 = Newark

4 = Nassau or Westchester

5 = Negotiated fare

6 = Group ride

storeAndFwdFlag string 2 N
Y

This flag indicates whether the trip record was held in vehicle memory before being sent to the vendor ("store and forward") because the vehicle did not have a connection to the server.

Y = Store-and-forward trip

N = Not a store-and-forward trip

tipAmount double 6,206 1.0
2.0

Tip amount. This field is automatically populated for credit card tips. Cash tips are not included.

tollsAmount double 2,150 5.54
5.76

Total amount of all tolls paid in the trip.

totalAmount double 20,188 7.8
6.8

The total amount charged to passengers. Does not include cash tips.

tripDistance double 7,060 0.9
1.0

The elapsed trip distance in miles reported by the taximeter.

tripType int 3 1
2

A code indicating whether the trip was a street hail or a dispatch. It is automatically assigned based on the metered rate in use, but can be altered by the driver.

1 = Street hail

2 = Dispatch

vendorID int 2 2
1

A code indicating the LPEP provider that supplied the record.

1 = Creative Mobile Technologies, LLC;

2 = VeriFone Inc.
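
The coded columns above (paymentType, rateCodeID, tripType, vendorID) are easier to read once mapped to labels. A minimal pandas sketch, assuming a DataFrame df loaded through one of the access paths below; the label maps are transcribed from the code tables above:

# Hypothetical label maps transcribed from the schema tables above.
payment_labels = {1: 'Credit card', 2: 'Cash', 3: 'No charge',
                  4: 'Dispute', 5: 'Unknown', 6: 'Voided trip'}
trip_type_labels = {1: 'Street hail', 2: 'Dispatch'}

# df is assumed to be a pandas DataFrame of this dataset.
df['paymentTypeLabel'] = df['paymentType'].map(payment_labels)
df['tripTypeLabel'] = df['tripType'].map(trip_type_labels)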


Azure Notebooks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import NycTlcGreen

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2018-06-06')
start_date = parser.parse('2018-05-01')
nyc_tlc = NycTlcGreen(start_date=start_date, end_date=end_date)
nyc_tlc_df = nyc_tlc.to_pandas_dataframe()
ActivityStarted, to_pandas_dataframe
ActivityStarted, to_pandas_dataframe_in_worker
Target paths: ['/puYear=2018/puMonth=5/', '/puYear=2018/puMonth=6/']
Looking for parquet files...
Reading them into Pandas dataframe...
Reading green/puYear=2018/puMonth=5/part-00087-tid-6037743401120983271-619c4849-c957-4290-a1b8-66832cb385b6-12506.c000.snappy.parquet under container nyctlc
Reading green/puYear=2018/puMonth=6/part-00171-tid-6037743401120983271-619c4849-c957-4290-a1b8-66832cb385b6-12590.c000.snappy.parquet under container nyctlc
Done.
ActivityCompleted: Activity=to_pandas_dataframe_in_worker, HowEnded=Success, Duration=5555.67 [ms]
ActivityCompleted: Activity=to_pandas_dataframe, HowEnded=Success, Duration=5559.68 [ms]
In [2]:
nyc_tlc_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 923257 entries, 0 to 498783
Data columns (total 23 columns):
vendorID                923257 non-null int32
lpepPickupDatetime      923257 non-null datetime64[ns]
lpepDropoffDatetime     923257 non-null datetime64[ns]
passengerCount          923257 non-null int32
tripDistance            923257 non-null float64
puLocationId            923257 non-null object
doLocationId            923257 non-null object
pickupLongitude         0 non-null float64
pickupLatitude          0 non-null float64
dropoffLongitude        0 non-null float64
dropoffLatitude         0 non-null float64
rateCodeID              923257 non-null int32
storeAndFwdFlag         923257 non-null object
paymentType             923257 non-null int32
fareAmount              923257 non-null float64
extra                   923257 non-null float64
mtaTax                  923257 non-null float64
improvementSurcharge    923257 non-null object
tipAmount               923257 non-null float64
tollsAmount             923257 non-null float64
ehailFee                0 non-null float64
totalAmount             923257 non-null float64
tripType                923257 non-null int32
dtypes: datetime64[ns](2), float64(12), int32(5), object(4)
memory usage: 151.4+ MB
In [1]:
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
In [2]:
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "nyctlc"
folder_name = "green"
In [3]:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().download_to_stream(local_file)
In [4]:
# Read the parquet file into Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
In [5]:
# You can add your filter below
print('Loaded as a Pandas data frame: ')
df
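For example, a minimal filter sketch that drops rows with a non-positive distance or fare (column names as in the schema above):

# A hypothetical filter: keep only trips with positive distance and fare.
df_filtered = df[(df['tripDistance'] > 0) & (df['fareAmount'] > 0)]
print(str(len(df_filtered)) + ' of ' + str(len(df)) + ' rows kept')
df_filtered.head()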

Azure Databricks

Language: Python
In [1]:
# This is a package in preview.
# You need to pip install azureml-opendatasets on your Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import NycTlcGreen

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2018-06-06')
start_date = parser.parse('2018-05-01')
nyc_tlc = NycTlcGreen(start_date=start_date, end_date=end_date)
nyc_tlc_df = nyc_tlc.to_spark_dataframe()
ActivityStarted, to_spark_dataframe
ActivityStarted, to_spark_dataframe_in_worker
ActivityCompleted: Activity=to_spark_dataframe_in_worker, HowEnded=Success, Duration=47328.45 [ms]
ActivityCompleted: Activity=to_spark_dataframe, HowEnded=Success, Duration=47332.79 [ms]
In [2]:
display(nyc_tlc_df.limit(5))
vendorID lpepPickupDatetime lpepDropoffDatetime passengerCount tripDistance puLocationId doLocationId pickupLongitude pickupLatitude dropoffLongitude dropoffLatitude rateCodeID storeAndFwdFlag paymentType fareAmount extra mtaTax improvementSurcharge tipAmount tollsAmount ehailFee totalAmount tripType puYear puMonth
2 2018-05-23T23:14:19.000+0000 2018-05-23T23:17:43.000+0000 1 0.61 16 42 null null null null 1 N 1 4.5 0.0 0.5 0.3 0.01 0.0 null 5.31 1 2018 5
2 2018-05-23T23:24:21.000+0000 2018-05-23T23:33:00.000+0000 1 1.14 42 116 null null null null 1 N 1 7.0 0.0 0.5 0.3 1.56 0.0 null 9.36 1 2018 5
2 2018-05-07T08:52:57.000+0000 2018-05-08T03:14:08.000+0000 1 1.27 119 247 null null null null 1 N 2 7.5 0.0 0.5 0.3 0.0 0.0 null 8.3 1 2018 5
2 2018-05-07T03:16:20.000+0000 2018-05-07T03:39:26.000+0000 1 3.82 247 18 null null null null 1 N 1 17.5 0.0 0.5 0.3 0.0 0.0 null 18.3 1 2018 5
2 2018-05-07T03:40:25.000+0000 2018-05-07T03:46:11.000+0000 1 0.98 181 36 null null null null 1 N 2 6.0 0.0 0.5 0.3 0.0 0.0 null 6.8 1 2018 5
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "nyctlc"
blob_relative_path = "green"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that no data is loaded yet at this point (evaluation is lazy)
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
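
Once the temporary view is registered, further SQL can be run against it. A hedged sketch of one possible aggregation, computing trip counts and average fares by pickup month using columns from the schema above:

# Hypothetical follow-up query: monthly trip counts and average fares.
monthly = spark.sql('''
    SELECT puYear, puMonth,
           COUNT(*) AS trips,
           AVG(fareAmount) AS avgFare
    FROM source
    GROUP BY puYear, puMonth
    ORDER BY puYear, puMonth
''')
display(monthly)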

Azure Synapse

Language: Python
In [6]:
# This is a package in preview.
from azureml.opendatasets import NycTlcGreen

from datetime import datetime
from dateutil import parser


end_date = parser.parse('2018-06-06')
start_date = parser.parse('2018-05-01')
nyc_tlc = NycTlcGreen(start_date=start_date, end_date=end_date)
nyc_tlc_df = nyc_tlc.to_spark_dataframe()
In [7]:
# Display top 5 rows
display(nyc_tlc_df.limit(5))
In [9]:
# Display data statistic information
display(nyc_tlc_df, summary=True)
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "nyctlc"
blob_relative_path = "green"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that no data is loaded yet at this point (evaluation is lazy)
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
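
As in the Databricks example, the registered view supports further queries. A hedged sketch computing the average credit-card tip percentage by year (paymentType = 1, per the code table above):

# Hypothetical follow-up query: average tip percentage on credit-card trips.
tips = spark.sql('''
    SELECT puYear,
           AVG(tipAmount / fareAmount) * 100 AS avgTipPct
    FROM source
    WHERE paymentType = 1 AND fareAmount > 0
    GROUP BY puYear
    ORDER BY puYear
''')
display(tips)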