In December 2019 my InfoWorld colleague Sharon Machlis wrote an article called “How to merge data in R using R merge, dplyr, or data.table.” Sharon is a whiz at R programming, and analytics in general, which comes through in the article. But suppose you’re planning on doing machine learning or deep learning on the data using Python and (for example) Scikit-learn, PyTorch, or TensorFlow? While it’s possible to pass data from R to Python, it’s not the best solution to the problem.
In this article, I’ll discuss how to accomplish data merging natively in Python, which will make it easy to pass the in-memory merged dataset on to one of the Python AI frameworks. I’ll use the same public datasets as Sharon did, which record US airline flight delays, but I’ll stick to the best framework for the purpose, which is currently Pandas.
Pandas and the DataFrame
Pandas is a BSD-licensed open source library that provides high-performance, easy-to-use data structures and data analysis tools for Python. The R dataframe data structure has a close equivalent in Pandas, the DataFrame. As the Pandas project describes it:
DataFrame is a two-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used Pandas object.
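To make the dict-of-Series idea concrete, here is a minimal sketch (with made-up city figures, purely for illustration) that builds a DataFrame from two Series. Rows are aligned on their index labels, and a label missing from one Series shows up as NaN:

import pandas as pd

# Two Series that share some index labels
population = pd.Series({"Austin": 950000, "Boston": 690000, "Chicago": 2700000})
area_sq_mi = pd.Series({"Austin": 326, "Boston": 90})

# A DataFrame built from a dict of Series; "Chicago" gets NaN for area_sq_mi
cities = pd.DataFrame({"population": population, "area_sq_mi": area_sq_mi})
print(cities)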
The Pandas method for joining two DataFrame objects is merge(), which is the single entry point for all standard database join operations between DataFrame or named Series objects. (Series objects are one-dimensional.) The merge() method specification is:
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
         left_index=False, right_index=False, sort=True,
         suffixes=('_x', '_y'), copy=True, indicator=False,
         validate=None)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left and right DataFrame and/or Series objects. If not passed and left_index and right_index are False, the intersection of the columns in the DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as keys. Can either be column names, index level names, or arrays with length equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as keys. Can either be column names, index level names, or arrays with length equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex (hierarchical), the number of levels must match the number of join keys from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series.
how: One of 'left', 'right', 'outer', 'inner'. Defaults to 'inner'. See below for a more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical order. Defaults to True; setting to False will improve performance substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series objects, even when reindexing is not necessary. Cannot be avoided in many cases but may improve performance and memory usage. The cases where copying can be avoided are somewhat pathological, but this option is provided nonetheless.
indicator: Add a column to the output DataFrame called _merge with information on the source of each row. _merge is Categorical-type and takes on a value of left_only for observations whose merge key appears only in the left DataFrame or Series, right_only for observations whose merge key appears only in the right DataFrame or Series, and both if the observation’s merge key is found in both.
validate: String, default None. If specified, checks if the merge is of the specified type.
1:1: checks if merge keys are unique in both left and right datasets.
1:m: checks if merge keys are unique in left dataset.
m:1: checks if merge keys are unique in right dataset.
m:m: allowed, but does not result in checks.
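To see several of these arguments in action before we get to the flight data, here is a small illustrative sketch; the two DataFrames, their column names, and their values are hypothetical, not taken from the datasets used below:

import pandas as pd

flights = pd.DataFrame({"carrier": ["AA", "DL", "XX"], "delay": [12, 0, 45]})
carriers = pd.DataFrame({"Code": ["AA", "DL", "UA"],
                         "Description": ["American", "Delta", "United"]})

merged = pd.merge(
    flights, carriers,
    how="left",            # keep every row from the left DataFrame
    left_on="carrier",     # join key in the left DataFrame
    right_on="Code",       # join key in the right DataFrame
    indicator=True,        # add a _merge column: left_only, right_only, or both
    validate="m:1",        # raise an error if Code is not unique on the right
)
print(merged)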
Read a CSV file in Pandas
As you might expect, Pandas has a method for reading CSV files, pd.read_csv(), which returns a DataFrame. It has many optional arguments, but for our purposes only the basic arguments are required:
filepath_or_buffer : various
Either a path to a file (a str, pathlib.Path, or py._path.local.LocalPath), URL (including HTTP, FTP, and Amazon S3 locations), or any object with a read() method (such as an open file or StringIO).
sep : str, defaults to ',' for read_csv()
Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and will automatically detect the separator with Python’s built-in sniffer tool, csv.Sniffer. In addition, separators longer than one character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the delimiter. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.
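A quick sketch of how those arguments look in practice, using hypothetical file names:

import pandas as pd

# Comma-separated file with a header row: the defaults are enough
df_csv = pd.read_csv("example.csv")

# Tab-separated file: pass the delimiter explicitly
df_tsv = pd.read_csv("example.tsv", sep="\t")

# Whitespace-delimited file: equivalent to sep='\s+'
df_ws = pd.read_csv("example.txt", delim_whitespace=True)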
US Flight Delays dataset
Sharon used two tables from the US Bureau of Transportation Statistics Flight Delays dataset, the actual flight delays data and the lookup table for the reporting airline. To make this comparable to Sharon’s analysis, let’s download her files, found here.
The zip file you’ll get also includes Sharon’s R code and a PowerPoint presentation. Go ahead and unzip the file rmerges3ways.zip. Then unzip the file 673598238_T_ONTIME_REPORTING.zip to yield 673598238_T_ONTIME_REPORTING.csv. We’ll use that CSV file along with L_UNIQUE_CARRIERS.csv_. Note the trailing underscore in the latter filename; the file is actually in valid comma-separated values format, and the US Bureau of Transportation Statistics site adds the underscore when it generates lookup tables.
Looking at the first 20 lines of the two CSV files in a text editor (below), we see that both have header rows and do use commas as separators. That implies that the defaults of read_csv() will be fine; we just need to specify the file paths.
"05Q","Comlux Aviation, AG"
"06Q","Master Top Linhas Aereas Ltd."
"07Q","Flair Airlines Ltd."
"09Q","Swift Air, LLC d/b/a Eastern Air Lines d/b/a Eastern"
"0CQ","ACM AIR CHARTER GmbH"
"0FQ","Maine Aviation Aircraft Charter, LLC"
"0GQ","Inter Island Airways, d/b/a Inter Island Air"
"0HQ","Polar Airlines de Mexico d/b/a Nova Air"
"0LQ","Metropix UK, LLP."
"0MQ","Multi-Aero, Inc. d/b/a Air Choice One"
"0Q","Flying Service N.V."
"0QQ","TAG Aviation (UK) Ltd."
"0RQ","TAG Aviation Espana S.L."
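If you would rather inspect the files from Python than from a text editor, a few lines of the standard library will do it; the paths here assume the unzipped files sit in the current working directory:

# Print the first few lines of each file to check headers and separators
for path in ("673598238_T_ONTIME_REPORTING.csv", "L_UNIQUE_CARRIERS.csv_"):
    print("---", path, "---")
    with open(path, encoding="utf-8") as f:
        for _ in range(5):
            print(f.readline().rstrip())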
Install Pandas and its dependencies
Unless you want to check out Pandas from GitHub and compile it yourself, you can install Pandas via the Pip package installer:
pip install pandas
Alternatively, if you are running the Anaconda Python distribution, you can install Pandas via Conda:
conda install pandas
Because I have both Python 2.7 and Python 3.7 installed and I want to use the latter, I used pip3 install pandas, and discovered that I already had Pandas on my machine. I did upgrade it, though, as the version I had was kind of old.
% pip3 install pandas
Requirement already satisfied: pandas in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (0.24.2)
Requirement already satisfied: pytz>=2011k in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2019.1)
Requirement already satisfied: python-dateutil>=2.5.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2.7.5)
Requirement already satisfied: numpy>=1.12.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (1.16.3)
Requirement already satisfied: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from python-dateutil>=2.5.0->pandas) (1.11.0)
% pip3 install --upgrade pandas
Downloading pandas-1.1.4-cp37-cp37m-macosx_10_9_x86_64.whl (9.9 MB)
|████████████████████████████████| 9.9 MB 4.2 MB/s
Requirement already satisfied, skipping upgrade: python-dateutil>=2.7.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2.7.5)
Requirement already satisfied, skipping upgrade: pytz>=2017.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (2019.1)
Requirement already satisfied, skipping upgrade: numpy>=1.15.4 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas) (1.16.3)
Requirement already satisfied, skipping upgrade: six>=1.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas) (1.11.0)
Installing collected packages: pandas
Attempting uninstall: pandas
Found existing installation: pandas 0.24.2
Successfully uninstalled pandas-0.24.2
Successfully installed pandas-1.1.4
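Once the upgrade finishes, a quick check from the Python 3 console confirms the installed version:

import pandas as pd
print(pd.__version__)   # 1.1.4 after the upgrade shown above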
Python code to merge data
At this point, the code pretty much writes itself.
import pandas as pd

# Read the flight delay data and the carrier lookup table
mydf = pd.read_csv("673598238_T_ONTIME_REPORTING.csv")
mylookup = pd.read_csv("L_UNIQUE_CARRIERS.csv_")
# Inner join (the default) on the two carrier-code columns
joined_df = pd.merge(mydf, mylookup, left_on="OP_UNIQUE_CARRIER", right_on="Code")
If we run these lines in the Python 3 console, we get: