
US Producer Price Index - Industry


The Producer Price Index (PPI) measures the average change over time in the selling prices that domestic producers receive for their output. The prices included in the PPI are from the first commercial transaction for the products and services covered.

The Producer Price Index Revision-Current Series indexes reflect price movements for the net output of producers, organized according to the North American Industry Classification System (NAICS). The PC dataset is compatible with a wide assortment of NAICS-based economic time series, including productivity, production, employment, wages, and earnings.

The PPI covers the output of all industries in the goods-producing sectors of the U.S. economy (mining, manufacturing, agriculture, fishing, and forestry), as well as natural gas, electricity, construction, and goods competitive with those made in the producing sectors, such as waste and scrap materials. In addition, as of January 1, 2011, the PPI program covers three-quarters of the service sector's output, publishing data for selected industries in the following sectors: wholesale and retail trade; transportation and warehousing; information; finance and insurance; real estate brokering, rental, and leasing; professional, scientific, and technical services; administrative, support, and waste management services; health care and social assistance; and accommodation.

The README at https://download.bls.gov/pub/time.series/wp/wp.txt contains detailed information about this dataset and resides at the original dataset location. For additional information, see the Frequently Asked Questions.

This dataset is sourced from the Producer Price Index data published by the U.S. Bureau of Labor Statistics (BLS). Review Linking and Copyright Information and Important Web Site Notices for the terms and conditions that govern the use of this dataset.

Storage location

This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.

Related datasets

Notices

Microsoft provides Azure Open Datasets on an "as is" basis. Microsoft makes no warranties, express or implied, guarantees, or conditions with respect to your use of the datasets. To the extent permitted under your local law, Microsoft disclaims all liability for any damages or losses, including direct, consequential, special, indirect, incidental, or punitive damages, resulting from your use of the datasets.

This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.

Access

Available in | When to use
Azure Notebooks | Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine.
Azure Databricks | Use this when you need the scale of an Azure managed Spark cluster to process the dataset.
Azure Synapse | Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Preview

product_code | industry_code | series_id | year | period | value | footnote_codes | seasonal | series_title | industry_name | product_name
2123240 | 212324 | PCU2123242123240 | 1998 | M01 | 117 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M02 | 116.9 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M03 | 116.3 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M04 | 116 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M05 | 116.2 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M06 | 116.3 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M07 | 116.6 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M08 | 116.3 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M09 | 116.2 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M10 | 115.9 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
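The monthly values in the preview lend themselves to a quick month-over-month percent-change calculation. A minimal sketch, using the first four index values from the preview rows above:

```python
# Index values for series PCU2123242123240, M01-M04 1998 (from the preview rows).
values = [117.0, 116.9, 116.3, 116.0]

# Month-over-month percent change: (current - previous) / previous * 100.
pct_change = [round((cur - prev) / prev * 100, 2)
              for prev, cur in zip(values, values[1:])]
print(pct_change)  # [-0.09, -0.51, -0.26]
```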
Name | Data type | Unique | Values (sample) | Description
footnote_codes | string | 3 | nan, P | Identifies footnotes for the data series. Most values are null. See https://download.bls.gov/pub/time.series/pc/pc.footnote.
industry_code | string | 1,064 | 221122, 325412 | NAICS code for the industry. See https://download.bls.gov/pub/time.series/pc/pc.industry for codes and names.
industry_name | string | 842 | Electric power distribution, Pharmaceutical preparation manufacturing | Name corresponding to the industry code. See https://download.bls.gov/pub/time.series/pc/pc.industry for codes and names.
period | string | 13 | M06, M07 | Identifies the period for which data are observed. See https://download.bls.gov/pub/time.series/pc/pc.period for the full list.
product_code | string | 4,822 | 335129, 311514P | Code identifying the product to which the data observation refers. See https://download.bls.gov/pub/time.series/pc/pc.product for the mapping between industry codes, product codes, and product names.
product_name | string | 3,313 | Primary products, Secondary products | Name of the product to which the data observation refers. See https://download.bls.gov/pub/time.series/pc/pc.product for the mapping between industry codes, product codes, and product names.
seasonal | string | 1 | U | Code identifying whether the data are seasonally adjusted. S = seasonally adjusted; U = unadjusted.
series_id | string | 4,822 | PCU22121022121012, PCU221122221122439 | Code identifying the specific series. A time series is a set of data observed over an extended period of time at consistent intervals. See https://download.bls.gov/pub/time.series/pc/pc.series for series details such as code, name, and starting and ending years.
series_title | string | 4,588 | PPI industry data for Electric power distribution-East North Central, not seasonally adjusted; PPI industry data for Electric power distribution-Pacific, not seasonally adjusted | Title of the specific series, corresponding to series_id.
value | float | 7,658 | 100.0, 100.4000015258789 | Price index for the item.
year | int | 22 | 2015, 2017 | Identifies the year of observation.
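The series_id values in this dataset appear to concatenate a survey prefix with the industry and product codes shown in the other columns. The hypothetical helper below splits one apart under that assumption; the authoritative series layout is documented in the pc.series file linked above.

```python
def parse_series_id(series_id: str):
    """Split a PPI industry series_id into its apparent components.

    Assumption (from the preview rows, not an official spec): a
    3-character survey prefix ("PCU"), a 6-digit NAICS industry code,
    and the remaining characters as the product code.
    """
    prefix = series_id[:3]           # survey abbreviation, e.g. "PCU"
    industry_code = series_id[3:9]   # 6-digit NAICS industry code
    product_code = series_id[9:]     # product code (length varies)
    return prefix, industry_code, product_code

print(parse_series_id("PCU2123242123240"))  # ('PCU', '212324', '2123240')
```

For the preview series, the parsed industry code matches the industry_code column (212324) and the remainder matches product_code (2123240).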


Azure Notebooks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPIIndustry

labor = UsLaborPPIIndustry()
labor_df = labor.to_pandas_dataframe()
ActivityStarted, to_pandas_dataframe
ActivityStarted, to_pandas_dataframe_in_worker
Looking for parquet files...
Reading them into Pandas dataframe...
Reading ppi_industry/part-00000-tid-1761562550540733469-da319923-1af6-4884-a5f4-16397508d15f-4596-1-c000.snappy.parquet under container laborstatisticscontainer
Done.
ActivityCompleted: Activity=to_pandas_dataframe_in_worker, HowEnded=Success, Duration=7978.44 [ms]
ActivityCompleted: Activity=to_pandas_dataframe, HowEnded=Success, Duration=8014.64 [ms]
In [2]:
labor_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 948634 entries, 0 to 948633
Data columns (total 11 columns):
product_code      948634 non-null object
industry_code     948634 non-null object
series_id         948634 non-null object
year              948634 non-null int32
period            948634 non-null object
value             948634 non-null float32
footnote_codes    948634 non-null object
seasonal          948634 non-null object
series_title      948634 non-null object
industry_name     948634 non-null object
product_name      948634 non-null object
dtypes: float32(1), int32(1), object(9)
memory usage: 72.4+ MB
In [1]:
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
In [2]:
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "laborstatisticscontainer"
folder_name = "ppi_industry/"
In [3]:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().download_to_stream(local_file)
In [4]:
# Read the parquet file into Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
In [5]:
# You can apply your own filters below
print('Loaded as a Pandas data frame: ')
df
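As one example of such a filter, the sketch below keeps only unadjusted observations for a single year. It runs on a small hypothetical frame that reuses the dataset's column names and value conventions, so it works without downloading the full dataset; the value 120.0 and the 2017 row are made up for illustration.

```python
import pandas as pd

# Hypothetical rows using the dataset's column names and value conventions.
df = pd.DataFrame({
    "series_id": ["PCU2123242123240", "PCU2123242123240", "PCU2123242123240"],
    "year": [1998, 1998, 2017],
    "period": ["M01", "M02", "M01"],
    "value": [117.0, 116.9, 120.0],
    "seasonal": ["U", "U", "U"],
})

# Keep unadjusted observations from a single year.
recent = df[(df["seasonal"] == "U") & (df["year"] == 1998)]
print(len(recent))  # 2
```

The same boolean-mask pattern applies unchanged to the full dataframe loaded above.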

Azure Databricks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPIIndustry

labor = UsLaborPPIIndustry()
labor_df = labor.to_spark_dataframe()
ActivityStarted, to_spark_dataframe
ActivityStarted, to_spark_dataframe_in_worker
ActivityCompleted: Activity=to_spark_dataframe_in_worker, HowEnded=Success, Duration=2665.84 [ms]
ActivityCompleted: Activity=to_spark_dataframe, HowEnded=Success, Duration=2668.22 [ms]
In [2]:
display(labor_df.limit(5))
product_code | industry_code | series_id | year | period | value | footnote_codes | seasonal | series_title | industry_name | product_name
2123240 | 212324 | PCU2123242123240 | 1998 | M01 | 117.0 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M02 | 116.9 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M03 | 116.3 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M04 | 116.0 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
2123240 | 212324 | PCU2123242123240 | 1998 | M05 | 116.2 | nan | U | PPI industry data for Kaolin and ball clay mining-Kaolin and ball clay, not seasonally adjusted | Kaolin and ball clay mining | Kaolin and ball clay
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_industry/"
blob_sas_token = r""
In [2]:
# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that no data is loaded yet (lazy evaluation)
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))

Azure Synapse

Language: Python
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_industry/"
blob_sas_token = r""
In [2]:
# Allow Spark to read from the blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark; note that no data is loaded yet (lazy evaluation)
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))