
US Producer Price Index - Commodities


The Producer Price Index (PPI) is a measure of the average change over time in the selling prices received by domestic producers for their output. The prices included in the PPI are from the first commercial transaction for the products and services covered.

About 10,000 PPIs for individual products and groups of products are released each month. PPIs are available for the output of nearly all industries in the goods-producing sectors of the US economy (mining, manufacturing, agriculture, fishing, and forestry) as well as natural gas, electricity, construction, and goods competitive with those made in the producing sectors, such as waste and scrap materials. The PPI program covers approximately 72 percent of the service sector's output, as measured by revenue reported in the 2007 Economic Census. Data includes industries in the following sectors: wholesale and retail trade; transportation and warehousing; information; finance and insurance; real estate brokering, rental, and leasing; professional, scientific, and technical services; administrative, support, and waste management services; health care and social assistance; and accommodation.
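Index values are relative to a base period set to 100, so movements are read as percent changes rather than prices. As an illustrative sketch (not part of the original documentation), using two values from the preview further below:

# Illustrative only: percent change between two PPI index observations.
# The sample values come from series WPU05120922 in the preview below.
def percent_change(earlier: float, later: float) -> float:
    return (later - earlier) / earlier * 100.0

print(round(percent_change(100.0, 121.0), 1))  # 2008 M06 -> 2009 M03: 21.0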

A README file containing detailed information about this dataset is available at the original dataset location. Additional information is available in the FAQs.

This dataset is produced from the Producer Price Indexes data published by the US Bureau of Labor Statistics (BLS). Review Linking and Copyright Information and Important Web Site Notices for the terms and conditions related to the use of this dataset.

Storage location

This dataset is stored in the East US Azure region. Allocating compute resources in East US is recommended for affinity.

Related datasets

Notices

MICROSOFT PROVIDES AZURE OPEN DATASETS ON AN "AS IS" BASIS. MICROSOFT MAKES NO WARRANTIES OR CONDITIONS, EXPRESS OR IMPLIED, WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, OR INCIDENTAL DAMAGES, RESULTING FROM YOUR USE OF THE DATASETS.

This dataset is provided under the original terms under which Microsoft received the source data. The dataset may include data sourced from Microsoft.

Access

Available in        When to use
Azure Notebooks     Quickly explore the dataset with Jupyter notebooks hosted on Azure or your local machine.
Azure Databricks    Use this when you need the scale of an Azure managed Spark cluster to process the dataset.
Azure Synapse       Use this when you need the scale of an Azure managed Spark cluster to process the dataset.

Preview

item_code  group_code  series_id    year  period  value  footnote_codes  seasonal
120922     05          WPU05120922  2008  M06     100    nan             U
120922     05          WPU05120922  2008  M07     104.6  nan             U
120922     05          WPU05120922  2008  M08     104.4  nan             U
120922     05          WPU05120922  2008  M09     98.3   nan             U
120922     05          WPU05120922  2008  M10     101.5  nan             U
120922     05          WPU05120922  2008  M11     95.2   nan             U
120922     05          WPU05120922  2008  M12     96.7   nan             U
120922     05          WPU05120922  2009  M01     104.2  nan             U
120922     05          WPU05120922  2009  M02     113.2  nan             U
120922     05          WPU05120922  2009  M03     121    nan             U

Every row above has series_title = "PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted", group_name = "Fuels and related products and power", and item_name = "Prepared bituminous coal underground mine, mechanically crushed/screened/sized only".
Columns (name; data type; unique value count; sample values; description):

footnote_codes (string; 3 unique; samples: nan, P)
Identifies footnotes for the data series. Most values are null. See https://download.bls.gov/pub/time.series/wp/wp.footnote.

group_code (string; 56 unique; samples: 02, 01)
Code identifying the major commodity group covered by the index. See https://download.bls.gov/pub/time.series/wp/wp.group for group codes and names.

group_name (string; 56 unique; samples: Processed foods and feeds, Farm products)
Name of the major commodity group covered by the index. See https://download.bls.gov/pub/time.series/wp/wp.group for group codes and names.

item_code (string; 2,949 unique; samples: 1, 11)
Identifies the item for which the data observations apply. See https://download.bls.gov/pub/time.series/wp/wp.item for item codes and names.

item_name (string; 3,410 unique; samples: Warehousing, storage, and related services; Passenger car rental)
Full name of the item. See https://download.bls.gov/pub/time.series/wp/wp.item for item codes and names.

period (string; 13 unique; samples: M06, M07)
Identifies the period for which the data was observed. See https://download.bls.gov/pub/time.series/wp/wp.period for a list of period values.

seasonal (string; 2 unique; samples: U, S)
Code identifying whether the data is seasonally adjusted. S = Seasonally Adjusted; U = Unadjusted.

series_id (string; 5,458 unique; samples: WPU131, WPU591)
Code identifying the specific series. A time series refers to a set of data observed over an extended period of time at consistent intervals. See https://download.bls.gov/pub/time.series/wp/wp.series for series details such as code, name, start and end year, and more.

series_title (string; 4,379 unique; samples: PPI Commodity data for Mining services, not seasonally adjusted; PPI Commodity data for Metal treatment services, not seasonally adjusted)
Title of the specific series. A time series refers to a set of data observed over an extended period of time at consistent intervals. See https://download.bls.gov/pub/time.series/wp/wp.series for series details such as id, name, start and end year, and more.

value (float; 6,788 unique; samples: 100.0, 99.0999984741211)
Price index for the item.

year (int; 26 unique; samples: 2018, 2017)
Identifies the year of observation.
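For the rows in the preview, series_id appears to be the concatenation of the "WPU" prefix, group_code, and item_code (for example, "WPU" + "05" + "120922" = WPU05120922). A minimal sketch of that relationship, assuming it holds across the unadjusted commodity series (consult the wp.series file linked above for the authoritative definition):

# Sketch: rebuild series_id from its apparent parts, as seen in the preview.
# Assumption: unadjusted commodity series use the "WPU" prefix followed by
# group_code and item_code. Verify against wp.series before relying on this.
def build_series_id(group_code: str, item_code: str, prefix: str = "WPU") -> str:
    return f"{prefix}{group_code}{item_code}"

assert build_series_id("05", "120922") == "WPU05120922"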


Azure Notebooks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPICommodity

labor = UsLaborPPICommodity()
labor_df = labor.to_pandas_dataframe()
ActivityStarted, to_pandas_dataframe
ActivityStarted, to_pandas_dataframe_in_worker
Looking for parquet files...
Reading them into Pandas dataframe...
Reading ppi_commodity/part-00000-tid-160579496407747812-077bf440-b39a-4520-9373-0a3f021dd59e-5654-1-c000.snappy.parquet under container laborstatisticscontainer
Done.
ActivityCompleted: Activity=to_pandas_dataframe_in_worker, HowEnded=Success, Duration=20409.23 [ms]
ActivityCompleted: Activity=to_pandas_dataframe, HowEnded=Success, Duration=20434.79 [ms]
In [2]:
labor_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6825676 entries, 0 to 6825675
Data columns (total 11 columns):
item_code         object
group_code        object
series_id         object
year              int32
period            object
value             float32
footnote_codes    object
seasonal          object
series_title      object
group_name        object
item_name         object
dtypes: float32(1), int32(1), object(9)
memory usage: 520.8+ MB
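As a follow-on sketch (not part of the original sample), the frame can be narrowed to a single unadjusted series and its month-over-month changes computed; the series_id below comes from the preview:
In [3]:
# Sketch: keep one unadjusted series from the preview, sorted chronologically.
# Assumes labor_df from the cells above.
coal_df = labor_df[
    (labor_df["series_id"] == "WPU05120922") & (labor_df["seasonal"] == "U")
].sort_values(["year", "period"])

# Month-over-month percent change of the index value.
coal_df = coal_df.assign(pct_change=coal_df["value"].pct_change() * 100)
print(coal_df[["year", "period", "value", "pct_change"]].head())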
In [1]:
# Pip install packages
import os, sys

!{sys.executable} -m pip install azure-storage-blob
!{sys.executable} -m pip install pyarrow
!{sys.executable} -m pip install pandas
In [2]:
# Azure storage access info
azure_storage_account_name = "azureopendatastorage"
azure_storage_sas_token = r""
container_name = "laborstatisticscontainer"
folder_name = "ppi_commodity/"
In [3]:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient

if azure_storage_account_name is None or azure_storage_sas_token is None:
    raise Exception(
        "Provide your specific name and key for your Azure Storage account--see the Prerequisites section earlier.")

print('Looking for the first parquet under the folder ' +
      folder_name + ' in container "' + container_name + '"...')
container_url = f"https://{azure_storage_account_name}.blob.core.windows.net/"
blob_service_client = BlobServiceClient(
    container_url, azure_storage_sas_token if azure_storage_sas_token else None)

container_client = blob_service_client.get_container_client(container_name)
blobs = container_client.list_blobs(folder_name)
sorted_blobs = sorted(list(blobs), key=lambda e: e.name, reverse=True)
targetBlobName = ''
for blob in sorted_blobs:
    if blob.name.startswith(folder_name) and blob.name.endswith('.parquet'):
        targetBlobName = blob.name
        break

print('Target blob to download: ' + targetBlobName)
_, filename = os.path.split(targetBlobName)
blob_client = container_client.get_blob_client(targetBlobName)
with open(filename, 'wb') as local_file:
    blob_client.download_blob().download_to_stream(local_file)
In [4]:
# Read the parquet file into Pandas data frame
import pandas as pd

print('Reading the parquet file into Pandas data frame')
df = pd.read_parquet(filename)
In [5]:
# You can add your own filter below
print('Loaded as a Pandas data frame: ')
df
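For example, a minimal filter might look like this (an illustrative sketch; the series_id comes from the preview above):
In [6]:
# Sketch: keep only one unadjusted series from the preview.
filtered_df = df[(df["series_id"] == "WPU05120922") & (df["seasonal"] == "U")]
print(filtered_df.shape)
filtered_df.head()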

Azure Databricks

Language: Python
In [1]:
# This is a package in preview.
from azureml.opendatasets import UsLaborPPICommodity

labor = UsLaborPPICommodity()
labor_df = labor.to_spark_dataframe()
ActivityStarted, to_spark_dataframe
ActivityStarted, to_spark_dataframe_in_worker
ActivityCompleted: Activity=to_spark_dataframe_in_worker, HowEnded=Success, Duration=2871.21 [ms]
ActivityCompleted: Activity=to_spark_dataframe, HowEnded=Success, Duration=2875.06 [ms]
In [2]:
display(labor_df.limit(5))
item_code  group_code  series_id    year  period  value  footnote_codes  seasonal
120922     05          WPU05120922  2008  M06     100.0  nan             U
120922     05          WPU05120922  2008  M07     104.6  nan             U
120922     05          WPU05120922  2008  M08     104.4  nan             U
120922     05          WPU05120922  2008  M09     98.3   nan             U
120922     05          WPU05120922  2008  M10     101.5  nan             U

Every row above has series_title = "PPI Commodity data for Fuels and related products and power-Prepared bituminous coal underground mine, mechanically crushed/screened/sized only, not seasonally adjusted", group_name = "Fuels and related products and power", and item_name = "Prepared bituminous coal underground mine, mechanically crushed/screened/sized only".
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_commodity/"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark. Note that this is lazy: no data is loaded yet.
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
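From here, the temporary view can be queried with Spark SQL. As an illustrative sketch (not part of the original sample), the average index value per year for the unadjusted series shown in the preview:
In [5]:
# Sketch: average index value per year for one series from the preview.
display(spark.sql("""
    SELECT year, AVG(value) AS avg_value
    FROM source
    WHERE series_id = 'WPU05120922' AND seasonal = 'U'
    GROUP BY year
    ORDER BY year
"""))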

Azure Synapse

Language: Python
In [1]:
# Azure storage access info
blob_account_name = "azureopendatastorage"
blob_container_name = "laborstatisticscontainer"
blob_relative_path = "ppi_commodity/"
blob_sas_token = r""
In [2]:
# Allow SPARK to read from Blob remotely
wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
spark.conf.set(
  'fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),
  blob_sas_token)
print('Remote blob path: ' + wasbs_path)
In [3]:
# Read the parquet files with Spark. Note that this is lazy: no data is loaded yet.
df = spark.read.parquet(wasbs_path)
print('Register the DataFrame as a SQL temporary view: source')
df.createOrReplaceTempView('source')
In [4]:
# Display top 10 rows
print('Displaying top 10 rows: ')
display(spark.sql('SELECT * FROM source LIMIT 10'))
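The view can then be explored further with Spark SQL. As an illustrative sketch (not part of the original sample), counting distinct series per commodity group:
In [5]:
# Sketch: number of distinct series per commodity group, largest first.
display(spark.sql("""
    SELECT group_name, COUNT(DISTINCT series_id) AS series_count
    FROM source
    GROUP BY group_name
    ORDER BY series_count DESC
    LIMIT 10
"""))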