10 Minutes to pandas

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.

Customarily, we import as follows:

  In [1]: import numpy as np

  In [2]: import pandas as pd

Object Creation

See the Data Structure Intro section.

Creating a Series by passing a list of values, letting pandas create a default integer index:

  In [3]: s = pd.Series([1, 3, 5, np.nan, 6, 8])

  In [4]: s
  Out[4]:
  0    1.0
  1    3.0
  2    5.0
  3    NaN
  4    6.0
  5    8.0
  dtype: float64

Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:

  In [5]: dates = pd.date_range('20130101', periods=6)

  In [6]: dates
  Out[6]:
  DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                 '2013-01-05', '2013-01-06'],
                dtype='datetime64[ns]', freq='D')

  In [7]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

  In [8]: df
  Out[8]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Creating a DataFrame by passing a dict of objects that can be converted to a series-like structure:

  In [9]: df2 = pd.DataFrame({'A': 1.,
     ...:                     'B': pd.Timestamp('20130102'),
     ...:                     'C': pd.Series(1, index=list(range(4)), dtype='float32'),
     ...:                     'D': np.array([3] * 4, dtype='int32'),
     ...:                     'E': pd.Categorical(["test", "train", "test", "train"]),
     ...:                     'F': 'foo'})
     ...:

  In [10]: df2
  Out[10]:
       A          B    C  D      E    F
  0  1.0 2013-01-02  1.0  3   test  foo
  1  1.0 2013-01-02  1.0  3  train  foo
  2  1.0 2013-01-02  1.0  3   test  foo
  3  1.0 2013-01-02  1.0  3  train  foo

The columns of the resulting DataFrame have different dtypes.

  In [11]: df2.dtypes
  Out[11]:
  A           float64
  B    datetime64[ns]
  C           float32
  D             int32
  E          category
  F            object
  dtype: object

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

  In [12]: df2.<TAB>  # noqa: E225, E999
  df2.A                  df2.bool
  df2.abs                df2.boxplot
  df2.add                df2.C
  df2.add_prefix         df2.clip
  df2.add_suffix         df2.clip_lower
  df2.align              df2.clip_upper
  df2.all                df2.columns
  df2.any                df2.combine
  df2.append             df2.combine_first
  df2.apply              df2.compound
  df2.applymap           df2.consolidate
  df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.

Viewing Data

See the Basics section.

Here is how to view the top and bottom rows of the frame:

  In [13]: df.head()
  Out[13]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401

  In [14]: df.tail(3)
  Out[14]:
                     A         B         C         D
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Display the index, columns:

  In [15]: df.index
  Out[15]:
  DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                 '2013-01-05', '2013-01-06'],
                dtype='datetime64[ns]', freq='D')

  In [16]: df.columns
  Out[16]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.

For df, our DataFrame of all floating-point values, DataFrame.to_numpy() is fast and doesn’t require copying data.

  In [17]: df.to_numpy()
  Out[17]:
  array([[ 0.4691, -0.2829, -1.5091, -1.1356],
         [ 1.2121, -0.1732,  0.1192, -1.0442],
         [-0.8618, -2.1046, -0.4949,  1.0718],
         [ 0.7216, -0.7068, -1.0396,  0.2719],
         [-0.425 ,  0.567 ,  0.2762, -1.0874],
         [-0.6737,  0.1136, -1.4784,  0.525 ]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_numpy() is relatively expensive.

  In [18]: df2.to_numpy()
  Out[18]:
  array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo']], dtype=object)

::: tip Note DataFrame.to_numpy() does not include the index or column labels in the output. :::

describe() shows a quick statistic summary of your data:

  In [19]: df.describe()
  Out[19]:
                A         B         C         D
  count  6.000000  6.000000  6.000000  6.000000
  mean   0.073711 -0.431125 -0.687758 -0.233103
  std    0.843157  0.922818  0.779887  0.973118
  min   -0.861849 -2.104569 -1.509059 -1.135632
  25%   -0.611510 -0.600794 -1.368714 -1.076610
  50%    0.022070 -0.228039 -0.767252 -0.386188
  75%    0.658444  0.041933 -0.034326  0.461706
  max    1.212112  0.567020  0.276232  1.071804

Transposing your data:

  In [20]: df.T
  Out[20]:
     2013-01-01  2013-01-02  2013-01-03  2013-01-04  2013-01-05  2013-01-06
  A    0.469112    1.212112   -0.861849    0.721555   -0.424972   -0.673690
  B   -0.282863   -0.173215   -2.104569   -0.706771    0.567020    0.113648
  C   -1.509059    0.119209   -0.494929   -1.039575    0.276232   -1.478427
  D   -1.135632   -1.044236    1.071804    0.271860   -1.087401    0.524988

Sorting by an axis:

  In [21]: df.sort_index(axis=1, ascending=False)
  Out[21]:
                     D         C         B         A
  2013-01-01 -1.135632 -1.509059 -0.282863  0.469112
  2013-01-02 -1.044236  0.119209 -0.173215  1.212112
  2013-01-03  1.071804 -0.494929 -2.104569 -0.861849
  2013-01-04  0.271860 -1.039575 -0.706771  0.721555
  2013-01-05 -1.087401  0.276232  0.567020 -0.424972
  2013-01-06  0.524988 -1.478427  0.113648 -0.673690

Sorting by values:

  In [22]: df.sort_values(by='B')
  Out[22]:
                     A         B         C         D
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401

Selection

::: tip Note While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized pandas data access methods, .at, .iat, .loc and .iloc. :::

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.

Getting

Selecting a single column, which yields a Series, equivalent to df.A:

  In [23]: df['A']
  Out[23]:
  2013-01-01    0.469112
  2013-01-02    1.212112
  2013-01-03   -0.861849
  2013-01-04    0.721555
  2013-01-05   -0.424972
  2013-01-06   -0.673690
  Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

  In [24]: df[0:3]
  Out[24]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

  In [25]: df['20130102':'20130104']
  Out[25]:
                     A         B         C         D
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selection by Label

See more in Selection by Label.

For getting a cross section using a label:

  In [26]: df.loc[dates[0]]
  Out[26]:
  A    0.469112
  B   -0.282863
  C   -1.509059
  D   -1.135632
  Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

  In [27]: df.loc[:, ['A', 'B']]
  Out[27]:
                     A         B
  2013-01-01  0.469112 -0.282863
  2013-01-02  1.212112 -0.173215
  2013-01-03 -0.861849 -2.104569
  2013-01-04  0.721555 -0.706771
  2013-01-05 -0.424972  0.567020
  2013-01-06 -0.673690  0.113648

Showing label slicing, both endpoints are included:

  In [28]: df.loc['20130102':'20130104', ['A', 'B']]
  Out[28]:
                     A         B
  2013-01-02  1.212112 -0.173215
  2013-01-03 -0.861849 -2.104569
  2013-01-04  0.721555 -0.706771

Reduction in the dimensions of the returned object:

  In [29]: df.loc['20130102', ['A', 'B']]
  Out[29]:
  A    1.212112
  B   -0.173215
  Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

  In [30]: df.loc[dates[0], 'A']
  Out[30]: 0.46911229990718628

For getting fast access to a scalar (equivalent to the prior method):

  In [31]: df.at[dates[0], 'A']
  Out[31]: 0.46911229990718628

Selection by Position

See more in Selection by Position.

Select via the position of the passed integers:

  In [32]: df.iloc[3]
  Out[32]:
  A    0.721555
  B   -0.706771
  C   -1.039575
  D    0.271860
  Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similarly to NumPy/Python:

  In [33]: df.iloc[3:5, 0:2]
  Out[33]:
                     A         B
  2013-01-04  0.721555 -0.706771
  2013-01-05 -0.424972  0.567020

By lists of integer position locations, similar to the NumPy/Python style:

  In [34]: df.iloc[[1, 2, 4], [0, 2]]
  Out[34]:
                     A         C
  2013-01-02  1.212112  0.119209
  2013-01-03 -0.861849 -0.494929
  2013-01-05 -0.424972  0.276232

For slicing rows explicitly:

  In [35]: df.iloc[1:3, :]
  Out[35]:
                     A         B         C         D
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

For slicing columns explicitly:

  In [36]: df.iloc[:, 1:3]
  Out[36]:
                     B         C
  2013-01-01 -0.282863 -1.509059
  2013-01-02 -0.173215  0.119209
  2013-01-03 -2.104569 -0.494929
  2013-01-04 -0.706771 -1.039575
  2013-01-05  0.567020  0.276232
  2013-01-06  0.113648 -1.478427

For getting a value explicitly:

  In [37]: df.iloc[1, 1]
  Out[37]: -0.17321464905330858

For getting fast access to a scalar (equivalent to the prior method):

  In [38]: df.iat[1, 1]
  Out[38]: -0.17321464905330858

Boolean Indexing

Using a single column’s values to select data.

  In [39]: df[df.A > 0]
  Out[39]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selecting values from a DataFrame where a boolean condition is met.

  In [40]: df[df > 0]
  Out[40]:
                     A         B         C         D
  2013-01-01  0.469112       NaN       NaN       NaN
  2013-01-02  1.212112       NaN  0.119209       NaN
  2013-01-03       NaN       NaN       NaN  1.071804
  2013-01-04  0.721555       NaN       NaN  0.271860
  2013-01-05       NaN  0.567020  0.276232       NaN
  2013-01-06       NaN  0.113648       NaN  0.524988

Using the isin() method for filtering:

  In [41]: df2 = df.copy()

  In [42]: df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']

  In [43]: df2
  Out[43]:
                     A         B         C         D      E
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632    one
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236    one
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804    two
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860  three
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401   four
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988  three

  In [44]: df2[df2['E'].isin(['two', 'four'])]
  Out[44]:
                     A         B         C         D     E
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804   two
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401  four

Setting

Setting a new column automatically aligns the data by the indexes.

  In [45]: s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))

  In [46]: s1
  Out[46]:
  2013-01-02    1
  2013-01-03    2
  2013-01-04    3
  2013-01-05    4
  2013-01-06    5
  2013-01-07    6
  Freq: D, dtype: int64

  In [47]: df['F'] = s1

Setting values by label:

  In [48]: df.at[dates[0], 'A'] = 0

Setting values by position:

  In [49]: df.iat[0, 1] = 0

Setting by assigning with a NumPy array:

  In [50]: df.loc[:, 'D'] = np.array([5] * len(df))

The result of the prior setting operations.

  In [51]: df
  Out[51]:
                     A         B         C  D    F
  2013-01-01  0.000000  0.000000 -1.509059  5  NaN
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0
  2013-01-05 -0.424972  0.567020  0.276232  5  4.0
  2013-01-06 -0.673690  0.113648 -1.478427  5  5.0

A where operation with setting.

  In [52]: df2 = df.copy()

  In [53]: df2[df2 > 0] = -df2

  In [54]: df2
  Out[54]:
                     A         B         C  D    F
  2013-01-01  0.000000  0.000000 -1.509059 -5  NaN
  2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
  2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
  2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
  2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
  2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0

Missing Data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
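
For example, reductions such as mean() skip NaN by default; skipna is the standard keyword that controls this (a quick illustration, not part of the original sequence):

  pd.Series([1.0, np.nan, 3.0]).mean()               # NaN is skipped: returns 2.0
  pd.Series([1.0, np.nan, 3.0]).mean(skipna=False)   # returns nan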

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.

  In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])

  In [56]: df1.loc[dates[0]:dates[1], 'E'] = 1

  In [57]: df1
  Out[57]:
                     A         B         C  D    F    E
  2013-01-01  0.000000  0.000000 -1.509059  5  NaN  1.0
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  NaN
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  NaN

To drop any rows that have missing data.

  In [58]: df1.dropna(how='any')
  Out[58]:
                     A         B         C  D    F    E
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0

Filling missing data.

  In [59]: df1.fillna(value=5)
  Out[59]:
                     A         B         C  D    F    E
  2013-01-01  0.000000  0.000000 -1.509059  5  5.0  1.0
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  5.0
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  5.0

To get the boolean mask where values are NaN:

  In [60]: pd.isna(df1)
  Out[60]:
                  A      B      C      D      F      E
  2013-01-01  False  False  False  False   True  False
  2013-01-02  False  False  False  False  False  False
  2013-01-03  False  False  False  False  False   True
  2013-01-04  False  False  False  False  False   True

Operations

See the Basic section on Binary Ops.

Stats

Operations in general exclude missing data.

Performing a descriptive statistic:

  In [61]: df.mean()
  Out[61]:
  A   -0.004474
  B   -0.383981
  C   -0.687758
  D    5.000000
  F    3.000000
  dtype: float64

Same operation on the other axis:

  In [62]: df.mean(1)
  Out[62]:
  2013-01-01    0.872735
  2013-01-02    1.431621
  2013-01-03    0.707731
  2013-01-04    1.395042
  2013-01-05    1.883656
  2013-01-06    1.592306
  Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.

  In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)

  In [64]: s
  Out[64]:
  2013-01-01    NaN
  2013-01-02    NaN
  2013-01-03    1.0
  2013-01-04    3.0
  2013-01-05    5.0
  2013-01-06    NaN
  Freq: D, dtype: float64

  In [65]: df.sub(s, axis='index')
  Out[65]:
                     A         B         C    D    F
  2013-01-01       NaN       NaN       NaN  NaN  NaN
  2013-01-02       NaN       NaN       NaN  NaN  NaN
  2013-01-03 -1.861849 -3.104569 -1.494929  4.0  1.0
  2013-01-04 -2.278445 -3.706771 -4.039575  2.0  0.0
  2013-01-05 -5.424972 -4.432980 -4.723768  0.0 -1.0
  2013-01-06       NaN       NaN       NaN  NaN  NaN

Apply

Applying functions to the data:

  In [66]: df.apply(np.cumsum)
  Out[66]:
                     A         B         C   D     F
  2013-01-01  0.000000  0.000000 -1.509059   5   NaN
  2013-01-02  1.212112 -0.173215 -1.389850  10   1.0
  2013-01-03  0.350263 -2.277784 -1.884779  15   3.0
  2013-01-04  1.071818 -2.984555 -2.924354  20   6.0
  2013-01-05  0.646846 -2.417535 -2.648122  25  10.0
  2013-01-06 -0.026844 -2.303886 -4.126549  30  15.0

  In [67]: df.apply(lambda x: x.max() - x.min())
  Out[67]:
  A    2.073961
  B    2.671590
  C    1.785291
  D    0.000000
  F    4.000000
  dtype: float64

Histogramming

See more at Histogramming and Discretization.

  In [68]: s = pd.Series(np.random.randint(0, 7, size=10))

  In [69]: s
  Out[69]:
  0    4
  1    2
  2    1
  3    2
  4    6
  5    4
  6    4
  7    6
  8    4
  9    4
  dtype: int64

  In [70]: s.value_counts()
  Out[70]:
  4    5
  6    2
  2    2
  1    1
  dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

  In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

  In [72]: s.str.lower()
  Out[72]:
  0       a
  1       b
  2       c
  3    aaba
  4    baca
  5     NaN
  6    caba
  7     dog
  8     cat
  dtype: object
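
Because patterns are treated as regular expressions by default, a literal match sometimes needs regex=False. A small sketch using str.contains() (not part of the original example) to show the difference:

  s.str.contains('B.ca')                # '.' is a regex wildcard, so 'Baca' matches
  s.str.contains('B.ca', regex=False)   # literal match only: no element matches
  # (the missing element stays NaN in both results)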

Merge

Concat

pandas provides various facilities for easily combining Series, DataFrame, and Panel objects, with various kinds of set logic for the indexes and relational-algebra functionality for join / merge-type operations.

See the Merging section.

Concatenating pandas objects together with concat():

  In [73]: df = pd.DataFrame(np.random.randn(10, 4))

  In [74]: df
  Out[74]:
            0         1         2         3
  0 -0.548702  1.467327 -1.015962 -0.483075
  1  1.637550 -1.217659 -0.291519 -1.745505
  2 -0.263952  0.991460 -0.919069  0.266046
  3 -0.709661  1.669052  1.037882 -1.705775
  4 -0.919854 -0.042379  1.247642 -0.009920
  5  0.290213  0.495767  0.362949  1.548106
  6 -1.131345 -0.089329  0.337863 -0.945867
  7 -0.932132  1.956030  0.017587 -0.016692
  8 -0.575247  0.254161 -1.143704  0.215897
  9  1.193555 -0.077118 -0.408530 -0.862495

  # break it into pieces
  In [75]: pieces = [df[:3], df[3:7], df[7:]]

  In [76]: pd.concat(pieces)
  Out[76]:
            0         1         2         3
  0 -0.548702  1.467327 -1.015962 -0.483075
  1  1.637550 -1.217659 -0.291519 -1.745505
  2 -0.263952  0.991460 -0.919069  0.266046
  3 -0.709661  1.669052  1.037882 -1.705775
  4 -0.919854 -0.042379  1.247642 -0.009920
  5  0.290213  0.495767  0.362949  1.548106
  6 -1.131345 -0.089329  0.337863 -0.945867
  7 -0.932132  1.956030  0.017587 -0.016692
  8 -0.575247  0.254161 -1.143704  0.215897
  9  1.193555 -0.077118 -0.408530 -0.862495

Join

SQL style merges. See the Database style joining section.

  In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

  In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

  In [79]: left
  Out[79]:
     key  lval
  0  foo     1
  1  foo     2

  In [80]: right
  Out[80]:
     key  rval
  0  foo     4
  1  foo     5

  In [81]: pd.merge(left, right, on='key')
  Out[81]:
     key  lval  rval
  0  foo     1     4
  1  foo     1     5
  2  foo     2     4
  3  foo     2     5

Because both frames repeat the key 'foo', the merge above produces the Cartesian product of the matching rows. Another example, where the keys are unique so each row matches exactly once:

  In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

  In [83]: right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

  In [84]: left
  Out[84]:
     key  lval
  0  foo     1
  1  bar     2

  In [85]: right
  Out[85]:
     key  rval
  0  foo     4
  1  bar     5

  In [86]: pd.merge(left, right, on='key')
  Out[86]:
     key  lval  rval
  0  foo     1     4
  1  bar     2     5

Append

Append rows to a DataFrame. See the Appending section.

  In [87]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])

  In [88]: df
  Out[88]:
            A         B         C         D
  0  1.346061  1.511763  1.627081 -0.990582
  1 -0.441652  1.211526  0.268520  0.024580
  2 -1.577585  0.396823 -0.105381 -0.532532
  3  1.453749  1.208843 -0.080952 -0.264610
  4 -0.727965 -0.589346  0.339969 -0.693205
  5 -0.339355  0.593616  0.884345  1.591431
  6  0.141809  0.220390  0.435589  0.192451
  7 -0.096701  0.803351  1.715071 -0.708758

  In [89]: s = df.iloc[3]

  In [90]: df.append(s, ignore_index=True)
  Out[90]:
            A         B         C         D
  0  1.346061  1.511763  1.627081 -0.990582
  1 -0.441652  1.211526  0.268520  0.024580
  2 -1.577585  0.396823 -0.105381 -0.532532
  3  1.453749  1.208843 -0.080952 -0.264610
  4 -0.727965 -0.589346  0.339969 -0.693205
  5 -0.339355  0.593616  0.884345  1.591431
  6  0.141809  0.220390  0.435589  0.192451
  7 -0.096701  0.803351  1.715071 -0.708758
  8  1.453749  1.208843 -0.080952 -0.264610

Grouping

By “group by” we are referring to a process involving one or more of the following steps (a toy sketch follows the list):

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure
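
A toy sketch of those three steps on a two-key frame (the names here are illustrative only, not from the example below):

  g = pd.DataFrame({'key': ['a', 'b', 'a'],
                    'val': [1, 2, 3]}).groupby('key')   # 1. split on 'key'
  g['val'].sum()   # 2. apply sum() per group, 3. combine into a Series
  # key
  # a    4
  # b    2
  # Name: val, dtype: int64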

See the Grouping section.

  In [91]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
     ....:                          'foo', 'bar', 'foo', 'foo'],
     ....:                    'B': ['one', 'one', 'two', 'three',
     ....:                          'two', 'two', 'one', 'three'],
     ....:                    'C': np.random.randn(8),
     ....:                    'D': np.random.randn(8)})
     ....:

  In [92]: df
  Out[92]:
       A      B         C         D
  0  foo    one -1.202872 -0.055224
  1  bar    one -1.814470  2.395985
  2  foo    two  1.018601  1.552825
  3  bar  three -0.595447  0.166599
  4  foo    two  1.395433  0.047609
  5  bar    two -0.392670 -0.136473
  6  foo    one  0.007207 -0.561757
  7  foo  three  1.928123 -1.623033

Grouping and then applying the sum() function to the resulting groups.

  In [93]: df.groupby('A').sum()
  Out[93]:
              C        D
  A
  bar -2.802588  2.42611
  foo  3.146492 -0.63958

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

  In [94]: df.groupby(['A', 'B']).sum()
  Out[94]:
                    C         D
  A   B
  bar one   -1.814470  2.395985
      three -0.595447  0.166599
      two   -0.392670 -0.136473
  foo one   -1.195665 -0.616981
      three  1.928123 -1.623033
      two    2.414034  1.600434

Reshaping

See the sections on Hierarchical Indexing and Reshaping.

Stack

  In [95]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
     ....:                      'foo', 'foo', 'qux', 'qux'],
     ....:                     ['one', 'two', 'one', 'two',
     ....:                      'one', 'two', 'one', 'two']]))
     ....:

  In [96]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

  In [97]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])

  In [98]: df2 = df[:4]

  In [99]: df2
  Out[99]:
                       A         B
  first second
  bar   one     0.029399 -0.542108
        two     0.282696 -0.087302
  baz   one    -1.575170  1.771208
        two     0.816482  1.100230

The stack() method “compresses” a level in the DataFrame’s columns.

  In [100]: stacked = df2.stack()

  In [101]: stacked
  Out[101]:
  first  second
  bar    one     A    0.029399
                 B   -0.542108
         two     A    0.282696
                 B   -0.087302
  baz    one     A   -1.575170
                 B    1.771208
         two     A    0.816482
                 B    1.100230
  dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

  In [102]: stacked.unstack()
  Out[102]:
                       A         B
  first second
  bar   one     0.029399 -0.542108
        two     0.282696 -0.087302
  baz   one    -1.575170  1.771208
        two     0.816482  1.100230

  In [103]: stacked.unstack(1)
  Out[103]:
  second        one       two
  first
  bar   A  0.029399  0.282696
        B -0.542108 -0.087302
  baz   A -1.575170  0.816482
        B  1.771208  1.100230

  In [104]: stacked.unstack(0)
  Out[104]:
  first          bar       baz
  second
  one    A  0.029399 -1.575170
         B -0.542108  1.771208
  two    A  0.282696  0.816482
         B -0.087302  1.100230

Pivot Tables

See the section on Pivot Tables.

  In [105]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
     .....:                    'B': ['A', 'B', 'C'] * 4,
     .....:                    'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
     .....:                    'D': np.random.randn(12),
     .....:                    'E': np.random.randn(12)})
     .....:

  In [106]: df
  Out[106]:
          A  B    C         D         E
  0     one  A  foo  1.418757 -0.179666
  1     one  B  foo -1.879024  1.291836
  2     two  C  foo  0.536826 -0.009614
  3   three  A  bar  1.006160  0.392149
  4     one  B  bar -0.029716  0.264599
  5     one  C  bar -1.146178 -0.057409
  6     two  A  foo  0.100900 -1.425638
  7   three  B  foo -1.035018  1.024098
  8     one  C  foo  0.314665 -0.106062
  9     one  A  bar -0.773723  1.824375
  10    two  B  bar -1.170653  0.595974
  11  three  C  bar  0.648740  1.167115

We can produce pivot tables from this data very easily:

  In [107]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
  Out[107]:
  C             bar       foo
  A     B
  one   A -0.773723  1.418757
        B -0.029716 -1.879024
        C -1.146178  0.314665
  three A  1.006160       NaN
        B       NaN -1.035018
        C  0.648740       NaN
  two   A       NaN  0.100900
        B -1.170653       NaN
        C       NaN  0.536826

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section.

  In [108]: rng = pd.date_range('1/1/2012', periods=100, freq='S')

  In [109]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)

  In [110]: ts.resample('5Min').sum()
  Out[110]:
  2012-01-01    25083
  Freq: 5T, dtype: int64

Time zone representation:

  In [111]: rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')

  In [112]: ts = pd.Series(np.random.randn(len(rng)), rng)

  In [113]: ts
  Out[113]:
  2012-03-06    0.464000
  2012-03-07    0.227371
  2012-03-08   -0.496922
  2012-03-09    0.306389
  2012-03-10   -2.290613
  Freq: D, dtype: float64

  In [114]: ts_utc = ts.tz_localize('UTC')

  In [115]: ts_utc
  Out[115]:
  2012-03-06 00:00:00+00:00    0.464000
  2012-03-07 00:00:00+00:00    0.227371
  2012-03-08 00:00:00+00:00   -0.496922
  2012-03-09 00:00:00+00:00    0.306389
  2012-03-10 00:00:00+00:00   -2.290613
  Freq: D, dtype: float64

Converting to another time zone:

  In [116]: ts_utc.tz_convert('US/Eastern')
  Out[116]:
  2012-03-05 19:00:00-05:00    0.464000
  2012-03-06 19:00:00-05:00    0.227371
  2012-03-07 19:00:00-05:00   -0.496922
  2012-03-08 19:00:00-05:00    0.306389
  2012-03-09 19:00:00-05:00   -2.290613
  Freq: D, dtype: float64

Converting between time span representations:

  In [117]: rng = pd.date_range('1/1/2012', periods=5, freq='M')

  In [118]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

  In [119]: ts
  Out[119]:
  2012-01-31   -1.134623
  2012-02-29   -1.561819
  2012-03-31   -0.260838
  2012-04-30    0.281957
  2012-05-31    1.523962
  Freq: M, dtype: float64

  In [120]: ps = ts.to_period()

  In [121]: ps
  Out[121]:
  2012-01   -1.134623
  2012-02   -1.561819
  2012-03   -0.260838
  2012-04    0.281957
  2012-05    1.523962
  Freq: M, dtype: float64

  In [122]: ps.to_timestamp()
  Out[122]:
  2012-01-01   -1.134623
  2012-02-01   -1.561819
  2012-03-01   -0.260838
  2012-04-01    0.281957
  2012-05-01    1.523962
  Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

  In [123]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')

  In [124]: ts = pd.Series(np.random.randn(len(prng)), prng)

  # last month of each quarter ('e' = end), + 1 month, then the first
  # hour of that month ('s' = start), + 9 hours = 9am
  In [125]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9

  In [126]: ts.head()
  Out[126]:
  1990-03-01 09:00   -0.902937
  1990-06-01 09:00    0.068159
  1990-09-01 09:00   -0.057873
  1990-12-01 09:00   -0.368204
  1991-03-01 09:00   -1.144073
  Freq: H, dtype: float64

Categoricals

pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.

  In [127]: df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
     .....:                    "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
     .....:

Convert the raw grades to a categorical data type.

  In [128]: df["grade"] = df["raw_grade"].astype("category")

  In [129]: df["grade"]
  Out[129]:
  0    a
  1    b
  2    b
  3    a
  4    a
  5    e
  Name: grade, dtype: category
  Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!).

  In [130]: df["grade"].cat.categories = ["very good", "good", "very bad"]
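
If you would rather not mutate the Series, cat.rename_categories() performs the same renaming but returns a new Series (shown here as an alternative, not part of the original sequence):

  # equivalent, but returns a new Series rather than modifying in place
  df["grade"] = df["grade"].cat.rename_categories(["very good", "good", "very bad"])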

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

  In [131]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",
     .....:                                               "good", "very good"])
     .....:

  In [132]: df["grade"]
  Out[132]:
  0    very good
  1         good
  2         good
  3    very good
  4    very good
  5     very bad
  Name: grade, dtype: category
  Categories (5, object): [very bad, bad, medium, good, very good]

Sorting is per order in the categories, not lexical order.

  In [133]: df.sort_values(by="grade")
  Out[133]:
     id raw_grade      grade
  5   6         e   very bad
  1   2         b       good
  2   3         b       good
  0   1         a  very good
  3   4         a  very good
  4   5         a  very good

Grouping by a categorical column also shows empty categories.

  In [134]: df.groupby("grade").size()
  Out[134]:
  grade
  very bad     1
  bad          0
  medium       0
  good         2
  very good    3
  dtype: int64

Plotting

See the Plotting docs.
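
The plotting calls below use matplotlib directly via the plt namespace; they assume the standard import convention:

  import matplotlib.pyplot as plt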

  In [135]: ts = pd.Series(np.random.randn(1000),
     .....:                index=pd.date_range('1/1/2000', periods=1000))
     .....:

  In [136]: ts = ts.cumsum()

  In [137]: ts.plot()
  Out[137]: <matplotlib.axes._subplots.AxesSubplot at 0x7f2b5771ac88>

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

  In [138]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
     .....:                   columns=['A', 'B', 'C', 'D'])
     .....:

  In [139]: df = df.cumsum()

  In [140]: plt.figure()
  Out[140]: <Figure size 640x480 with 0 Axes>

  In [141]: df.plot()
  Out[141]: <matplotlib.axes._subplots.AxesSubplot at 0x7f2b53a2d7f0>

  In [142]: plt.legend(loc='best')
  Out[142]: <matplotlib.legend.Legend at 0x7f2b539728d0>

Getting Data In/Out

CSV

Writing to a csv file.

  In [143]: df.to_csv('foo.csv')

Reading from a csv file.

  In [144]: pd.read_csv('foo.csv')
  Out[144]:
      Unnamed: 0          A          B         C          D
  0   2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  1   2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2   2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  3   2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  4   2000-01-05   0.578117   0.511371  0.103552  -2.428202
  5   2000-01-06   0.478344   0.449933 -0.741620  -1.962409
  6   2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
  ..         ...        ...        ...       ...        ...
  993 2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
  994 2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
  995 2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  996 2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  997 2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  998 2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  999 2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 5 columns]
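
Note that the round trip leaves the old index in the 'Unnamed: 0' column. To read it back in as the index instead, read_csv's index_col parameter handles that (a small aside, not part of the original sequence):

  pd.read_csv('foo.csv', index_col=0)   # treat the first column as the index again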

HDF5

Reading and writing to HDFStores (HDF5 support requires the PyTables package).

Writing to an HDF5 Store.

  In [145]: df.to_hdf('foo.h5', 'df')

Reading from an HDF5 Store.

  In [146]: pd.read_hdf('foo.h5', 'df')
  Out[146]:
                      A          B         C          D
  2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  2000-01-05   0.578117   0.511371  0.103552  -2.428202
  2000-01-06   0.478344   0.449933 -0.741620  -1.962409
  2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
  ...               ...        ...       ...        ...
  2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
  2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
  2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 4 columns]

Excel

Reading and writing to MS Excel.

Writing to an Excel file.

  In [147]: df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an Excel file.

  In [148]: pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
  Out[148]:
      Unnamed: 0          A          B         C          D
  0   2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  1   2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2   2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  3   2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  4   2000-01-05   0.578117   0.511371  0.103552  -2.428202
  5   2000-01-06   0.478344   0.449933 -0.741620  -1.962409
  6   2000-01-07   1.235339  -0.091757 -1.543861  -1.084753
  ..         ...        ...        ...       ...        ...
  993 2002-09-20 -10.628548  -9.153563 -7.883146  28.313940
  994 2002-09-21 -10.390377  -8.727491 -6.399645  30.914107
  995 2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  996 2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  997 2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  998 2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  999 2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 5 columns]

Gotchas

If you attempt to use a Series or DataFrame in a boolean context (for example, in an if statement), you might see an exception like:

  >>> if pd.Series([False, True, False]):
  ...     print("I was true")
  Traceback
      ...
  ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
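
The fix, as the message suggests, is to reduce the Series to a single boolean yourself. A minimal sketch:

  s = pd.Series([False, True, False])

  if s.any():       # True if at least one element is True
      print("at least one was true")

  if not s.empty:   # test the container's size rather than its values
      print("the series is non-empty")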

See Comparisons for an explanation and what to do.

See Gotchas as well.