
US Producer Price Index - Commodities


The Producer Price Index (PPI) is a measure of the average change over time in the selling prices received by domestic producers for their output. The prices included in the PPI are from the first commercial transaction for the products and services covered.
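Because the PPI is published as an index rather than as raw prices, analyses typically work with percent changes between periods. The following is a minimal, hypothetical Python sketch (pandas assumed; the sample values come from the preview table below):

import pandas as pd

# Hypothetical example: month-over-month percent change computed from
# index levels, using sample values from the preview table below.
df = pd.DataFrame({
    "period": ["M06", "M07", "M08"],
    "value": [100.0, 104.6, 104.4],
})
df["pct_change"] = df["value"].pct_change() * 100
print(df)  # M07 vs M06: (104.6 - 100.0) / 100.0 * 100 = 4.6% increase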

Approximately 10,000 PPIs for individual products and groups of products are released each month. PPIs are available for the output of nearly all industries in the goods-producing sectors of the US economy (mining, manufacturing, agriculture, fishing, and forestry), as well as natural gas, electricity, construction, and goods competitive with those made in the producing sectors, such as waste and scrap materials. The PPI program covers about 72 percent of the service sector's output, as measured by revenue reported in the 2007 Economic Census. Data are included for the following industries: wholesale and retail trade; transportation and warehousing; information; finance and insurance; real estate brokering, rental, and leasing; professional, scientific, and technical services; administrative, support, and waste management services; health care and social assistance; and accommodation.

The README file at https://download.bls.gov/pub/time.series/wp/wp.txt, available at the original dataset location, contains detailed information about this dataset. For additional information, see the FAQ.

This dataset is sourced from Producer Price Index data published by the US Bureau of Labor Statistics (BLS). For the terms and conditions governing the use of this dataset, see Linking and Copyright Information and Important Web Site Notices.

Storage location

This dataset is stored in the East US Azure region. We recommend allocating compute resources in East US for affinity.

Related datasets

Notices

Microsoft provides Azure Open Datasets on an "as is" basis. Microsoft makes no warranties, express or implied, guarantees, or conditions with respect to your use of the datasets. To the extent permitted under your local law, Microsoft disclaims all liability for any damages or losses, including direct, consequential, special, indirect, incidental, or punitive, resulting from your use of the datasets.

This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.

Access

Available in | When to use
Azure Notebooks | Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine.
Azure Databricks | Use this when you need the scale of an Azure managed Spark cluster to process the dataset.
Azure Synapse | Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Preview

item_code | group_code | series_id | year | period | value | footnote_codes | seasonal | series_title | group_name | item_name
120922 | 05 | WPU05120922 | 2008 | M06 | 100 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M07 | 104.6 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M08 | 104.4 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M09 | 98.3 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M10 | 101.5 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M11 | 95.2 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M12 | 96.7 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2009 | M01 | 104.2 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2009 | M02 | 113.2 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2009 | M03 | 121 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
Name | Data type | Unique | Values (sample) | Description
footnote_codes | string | 3 | nan, P | Identifies footnotes for the data series. Most values are null. See https://download.bls.gov/pub/time.series/wp/wp.footnote.
group_code | string | 56 | 02, 01 | Code identifying the major commodity group covered by the index. See https://download.bls.gov/pub/time.series/wp/wp.group for group codes and names.
group_name | string | 56 | Processed foods and feeds; Farm products | Name of the major commodity group covered by the index. See https://download.bls.gov/pub/time.series/wp/wp.group for group codes and names.
item_code | string | 2,949 | 1, 11 | Identifies the item to which the data observation pertains. See https://download.bls.gov/pub/time.series/wp/wp.item for item codes and names.
item_name | string | 3,410 | Warehousing, storage, and related services; Security guard services | Full name of the item. See https://download.bls.gov/pub/time.series/wp/wp.item for item codes and names.
period | string | 13 | M06, M07 | Identifies the period for which the data is observed. See https://download.bls.gov/pub/time.series/wp/wp.period for a list of period values.
seasonal | string | 2 | U, S | Code identifying whether the data is seasonally adjusted. S = seasonally adjusted; U = unadjusted.
series_id | string | 5,458 | WPU601, WPU011 | Code identifying the specific series. A time series is a set of data observed at consistent intervals over an extended period of time. See https://download.bls.gov/pub/time.series/wp/wp.series for series details such as code, name, and start and end year.
series_title | string | 4,379 | PPI Commodity data for Metal treatment services, not seasonally adjusted; PPI Commodity data for Mining services, not seasonally adjusted | Title of the specific series. See https://download.bls.gov/pub/time.series/wp/wp.series for series details such as code, name, and start and end year.
value | float | 6,788 | 100.0, 99.0999984741211 | Price index for the item.
year | int | 26 | 2018, 2017 | Identifies the year of observation.
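To see how these columns combine in practice, here is a small, hypothetical pandas sketch that isolates a single unadjusted series and puts it in calendar order. It assumes a DataFrame `df` loaded as in the access examples below; the filter values come from the preview table above.

# Hypothetical filter: one unadjusted series, sorted chronologically.
series = df[(df["series_id"] == "WPU05120922") & (df["seasonal"] == "U")]
series = series.sort_values(["year", "period"])
print(series[["year", "period", "value"]].head())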


Azure Notebooks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPICommodity

labor = UsLaborPPICommodity()
labor_df = labor.to_pandas_dataframe()
ActivityStarted, to_pandas_dataframe
ActivityStarted, to_pandas_dataframe_in_worker
Looking for parquet files...
Reading them into Pandas dataframe...
Reading ppi_commodity/part-00000-tid-160579496407747812-077bf440-b39a-4520-9373-0a3f021dd59e-5654-1-c000.snappy.parquet under container laborstatisticscontainer
Done.
ActivityCompleted: Activity=to_pandas_dataframe_in_worker, HowEnded=Success, Duration=20409.23 [ms]
ActivityCompleted: Activity=to_pandas_dataframe, HowEnded=Success, Duration=20434.79 [ms]
In [2]:
labor_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6825676 entries, 0 to 6825675
Data columns (total 11 columns):
item_code         object
group_code        object
series_id         object
year              int32
period            object
value             float32
footnote_codes    object
seasonal          object
series_title      object
group_name        object
item_name         object
dtypes: float32(1), int32(1), object(9)
memory usage: 520.8+ MB
In [1]:
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
In [2]:
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "laborstatisticscontainer"
folder_name = "ppi_commodity/"
In [3]:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().download_to_stream(local_file)
In [4]:
# Read the parquet file into Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
In [5]:
# You can add your filter below
print('Loaded as a Pandas data frame: ')
df
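For example, a hypothetical filter that keeps only seasonally unadjusted observations for a single year might look like this (column names as documented in the schema above):

In [6]:
# Hypothetical filter: unadjusted observations for 2018 only.
filtered_df = df[(df['seasonal'] == 'U') & (df['year'] == 2018)]
filtered_df.head()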

Azure Databricks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPICommodity

labor = UsLaborPPICommodity()
labor_df = labor.to_spark_dataframe()
ActivityStarted, to_spark_dataframe
ActivityStarted, to_spark_dataframe_in_worker
ActivityCompleted: Activity=to_spark_dataframe_in_worker, HowEnded=Success, Duration=2871.21 [ms]
ActivityCompleted: Activity=to_spark_dataframe, HowEnded=Success, Duration=2875.06 [ms]
In [2]:
display(labor_df.limit(5))
item_code | group_code | series_id | year | period | value | footnote_codes | seasonal | series_title | group_name | item_name
120922 | 05 | WPU05120922 | 2008 | M06 | 100.0 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M07 | 104.6 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M08 | 104.4 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M09 | 98.3 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
120922 | 05 | WPU05120922 | 2008 | M10 | 101.5 | nan | U | PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted | Fuels and related products and power | Prepared bituminous coal underground mine, mechanically crushed/screened/sized only
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_commodity/"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that this is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
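The registered temporary view also supports ordinary Spark SQL beyond a simple LIMIT. As a hypothetical follow-on query (not part of the original walkthrough), this averages the index by year for the series shown in the preview:

In [5]:
# Hypothetical aggregation: average index value per year for one series.
display(spark.sql("""
    SELECT year, AVG(value) AS avg_value
    FROM source
    WHERE series_id = 'WPU05120922'
    GROUP BY year
    ORDER BY year
"""))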

Azure Synapse

Language: Python
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_commodity/"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that this is lazy, so no data is loaded yet
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
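The same temporary view works in Synapse Spark SQL. As a hypothetical next step, you could pull a single unadjusted series in chronological order:

In [5]:
# Hypothetical query: one unadjusted series in calendar order.
display(spark.sql("""
    SELECT year, period, value
    FROM source
    WHERE series_id = 'WPU05120922' AND seasonal = 'U'
    ORDER BY year, period
"""))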