Cookbook
This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the Stack Overflow and GitHub links. Many of the links contain expanded information, beyond what the in-line examples offer.
pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are imported explicitly for newer users.
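Both are assumed to be imported as follows throughout the recipes below:

```python
import numpy as np
import pandas as pd
```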
These examples are written for Python 3. Minor tweaks might be necessary for earlier Python versions.
Idioms
These are some neat pandas idioms
if-then/if-then-else on one column, and assignment to another one or more columns:
```python
In [1]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ...:                    'BBB': [10, 20, 30, 40],
   ...:                    'CCC': [100, 50, -30, -50]})

In [2]: df
Out[2]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50
```
if-then…
An if-then on one column
```python
In [3]: df.loc[df.AAA >= 5, 'BBB'] = -1

In [4]: df
Out[4]:
   AAA  BBB  CCC
0    4   10  100
1    5   -1   50
2    6   -1  -30
3    7   -1  -50
```
An if-then with assignment to 2 columns:
```python
In [5]: df.loc[df.AAA >= 5, ['BBB', 'CCC']] = 555

In [6]: df
Out[6]:
   AAA  BBB  CCC
0    4   10  100
1    5  555  555
2    6  555  555
3    7  555  555
```
Add another line with different logic, to do the -else
```python
In [7]: df.loc[df.AAA < 5, ['BBB', 'CCC']] = 2000

In [8]: df
Out[8]:
   AAA   BBB   CCC
0    4  2000  2000
1    5   555   555
2    6   555   555
3    7   555   555
```
Or use pandas where after you’ve set up a mask
```python
In [9]: df_mask = pd.DataFrame({'AAA': [True] * 4,
   ...:                         'BBB': [False] * 4,
   ...:                         'CCC': [True, False] * 2})

In [10]: df.where(df_mask, -1000)
Out[10]:
   AAA   BBB   CCC
0    4 -1000  2000
1    5 -1000 -1000
2    6 -1000   555
3    7 -1000 -1000
```
if-then-else using numpy’s where()
```python
In [11]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [12]: df
Out[12]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [13]: df['logic'] = np.where(df['AAA'] > 5, 'high', 'low')

In [14]: df
Out[14]:
   AAA  BBB  CCC logic
0    4   10  100   low
1    5   20   50   low
2    6   30  -30  high
3    7   40  -50  high
```
Splitting
Split a frame with a boolean criterion
```python
In [15]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [16]: df
Out[16]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [17]: df[df.AAA <= 5]
Out[17]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50

In [18]: df[df.AAA > 5]
Out[18]:
   AAA  BBB  CCC
2    6   30  -30
3    7   40  -50
```
Building criteria
Select with multi-column criteria
```python
In [19]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [20]: df
Out[20]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50
```
…and (without assignment returns a Series)
```python
In [21]: df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']
Out[21]:
0    4
1    5
Name: AAA, dtype: int64
```
…or (without assignment returns a Series)
```python
In [22]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']
Out[22]:
0    4
1    5
2    6
3    7
Name: AAA, dtype: int64
```
…or (with assignment modifies the DataFrame.)
```python
In [23]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1

In [24]: df
Out[24]:
   AAA  BBB  CCC
0  0.1   10  100
1  5.0   20   50
2  0.1   30  -30
3  0.1   40  -50
```
Select rows with data closest to certain value using argsort
```python
In [25]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [26]: df
Out[26]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [27]: aValue = 43.0

In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
   AAA  BBB  CCC
1    5   20   50
0    4   10  100
2    6   30  -30
3    7   40  -50
```
Dynamically reduce a list of criteria using binary operators
```python
In [29]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [30]: df
Out[30]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [31]: Crit1 = df.AAA <= 5.5

In [32]: Crit2 = df.BBB == 10.0

In [33]: Crit3 = df.CCC > -40.0
```
One could hard code:
```python
In [34]: AllCrit = Crit1 & Crit2 & Crit3
```
…Or it can be done with a list of dynamically built criteria
```python
In [35]: import functools

In [36]: CritList = [Crit1, Crit2, Crit3]

In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)

In [38]: df[AllCrit]
Out[38]:
   AAA  BBB  CCC
0    4   10  100
```
Selection
DataFrames
The indexing docs.
Using both row labels and value conditionals
```python
In [39]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [40]: df
Out[40]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
   AAA  BBB  CCC
0    4   10  100
2    6   30  -30
```
Use loc for label-oriented slicing and iloc for positional slicing
```python
In [42]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]},
   ....:                   index=['foo', 'bar', 'boo', 'kar'])
```
There are 2 explicit slicing methods, with a third general case
- Positional-oriented (Python slicing style : exclusive of end)
- Label-oriented (Non-Python slicing style : inclusive of end)
- General (Either slicing style : depends on if the slice contains labels or positions)
```python
In [43]: df.loc['bar':'kar']  # Label
Out[43]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

# Generic
In [44]: df.iloc[0:3]
Out[44]:
     AAA  BBB  CCC
foo    4   10  100
bar    5   20   50
boo    6   30  -30

In [45]: df['bar':'kar']
Out[45]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50
```
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
```python
In [46]: data = {'AAA': [4, 5, 6, 7],
   ....:         'BBB': [10, 20, 30, 40],
   ....:         'CCC': [100, 50, -30, -50]}

In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4])  # Note index starts at 1.

In [48]: df2.iloc[1:3]  # Position-oriented
Out[48]:
   AAA  BBB  CCC
2    5   20   50
3    6   30  -30

In [49]: df2.loc[1:3]  # Label-oriented
Out[49]:
   AAA  BBB  CCC
1    4   10  100
2    5   20   50
3    6   30  -30
```
Using the inverse operator (~) to take the complement of a mask
```python
In [50]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})

In [51]: df
Out[51]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
   AAA  BBB  CCC
1    5   20   50
3    7   40  -50
```
New columns
Efficiently and dynamically creating new columns using applymap
```python
In [53]: df = pd.DataFrame({'AAA': [1, 2, 1, 3],
   ....:                    'BBB': [1, 1, 2, 2],
   ....:                    'CCC': [2, 1, 3, 1]})

In [54]: df
Out[54]:
   AAA  BBB  CCC
0    1    1    2
1    2    1    1
2    1    2    3
3    3    2    1

In [55]: source_cols = df.columns  # Or some subset would work too

In [56]: new_cols = [str(x) + "_cat" for x in source_cols]

In [57]: categories = {1: 'Alpha', 2: 'Beta', 3: 'Charlie'}

In [58]: df[new_cols] = df[source_cols].applymap(categories.get)

In [59]: df
Out[59]:
   AAA  BBB  CCC  AAA_cat BBB_cat  CCC_cat
0    1    1    2    Alpha   Alpha     Beta
1    2    1    1     Beta   Alpha    Alpha
2    1    2    3    Alpha    Beta  Charlie
3    3    2    1  Charlie    Beta    Alpha
```
Keep other columns when using min() with groupby
```python
In [60]: df = pd.DataFrame({'AAA': [1, 1, 1, 2, 2, 2, 3, 3],
   ....:                    'BBB': [2, 1, 3, 4, 5, 1, 2, 3]})

In [61]: df
Out[61]:
   AAA  BBB
0    1    2
1    1    1
2    1    3
3    2    4
4    2    5
5    2    1
6    3    2
7    3    3
```
Method 1 : idxmin() to get the index of the minimums
```python
In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
   AAA  BBB
1    1    1
5    2    1
6    3    2
```
Method 2 : sort then take first of each
```python
In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
   AAA  BBB
0    1    1
1    2    1
2    3    2
```
Notice the same results, with the exception of the index.
MultiIndexing
The multiindexing docs.
Creating a MultiIndex from a labeled frame
```python
In [64]: df = pd.DataFrame({'row': [0, 1, 2],
   ....:                    'One_X': [1.1, 1.1, 1.1],
   ....:                    'One_Y': [1.2, 1.2, 1.2],
   ....:                    'Two_X': [1.11, 1.11, 1.11],
   ....:                    'Two_Y': [1.22, 1.22, 1.22]})

In [65]: df
Out[65]:
   row  One_X  One_Y  Two_X  Two_Y
0    0    1.1    1.2   1.11   1.22
1    1    1.1    1.2   1.11   1.22
2    2    1.1    1.2   1.11   1.22

# As Labelled Index
In [66]: df = df.set_index('row')

In [67]: df
Out[67]:
     One_X  One_Y  Two_X  Two_Y
row
0      1.1    1.2   1.11   1.22
1      1.1    1.2   1.11   1.22
2      1.1    1.2   1.11   1.22

# With Hierarchical Columns
In [68]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_'))
   ....:                                         for c in df.columns])

In [69]: df
Out[69]:
     One        Two
       X    Y     X     Y
row
0    1.1  1.2  1.11  1.22
1    1.1  1.2  1.11  1.22
2    1.1  1.2  1.11  1.22

# Now stack & Reset
In [70]: df = df.stack(0).reset_index(1)

In [71]: df
Out[71]:
    level_1     X     Y
row
0       One  1.10  1.20
0       Two  1.11  1.22
1       One  1.10  1.20
1       Two  1.11  1.22
2       One  1.10  1.20
2       Two  1.11  1.22

# And fix the labels (Notice the label 'level_1' got added automatically)
In [72]: df.columns = ['Sample', 'All_X', 'All_Y']

In [73]: df
Out[73]:
    Sample  All_X  All_Y
row
0      One   1.10   1.20
0      Two   1.11   1.22
1      One   1.10   1.20
1      Two   1.11   1.22
2      One   1.10   1.20
2      Two   1.11   1.22
```
Arithmetic
Performing arithmetic with a MultiIndex that needs broadcasting
```python
In [74]: cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A', 'B', 'C']
   ....:                                   for y in ['O', 'I']])

In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=['n', 'm'], columns=cols)

In [76]: df
Out[76]:
          A                   B                   C
          O         I         O         I         O         I
n  0.469112 -0.282863 -1.509059 -1.135632  1.212112 -0.173215
m  0.119209 -1.044236 -0.861849 -2.104569 -0.494929  1.071804

In [77]: df = df.div(df['C'], level=1)

In [78]: df
Out[78]:
          A                   B              C
          O         I         O         I    O    I
n  0.387021  1.633022 -1.244983  6.556214  1.0  1.0
m -0.240860 -0.974279  1.741358 -1.963577  1.0  1.0
```
Slicing
```python
In [79]: coords = [('AA', 'one'), ('AA', 'six'), ('BB', 'one'), ('BB', 'two'),
   ....:           ('BB', 'six')]

In [80]: index = pd.MultiIndex.from_tuples(coords)

In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ['MyData'])

In [82]: df
Out[82]:
        MyData
AA one      11
   six      22
BB one      33
   two      44
   six      55
```
To take the cross section of the 1st level and 1st axis of the index:
```python
# Note : level and axis are optional, and default to zero
In [83]: df.xs('BB', level=0, axis=0)
Out[83]:
     MyData
one      33
two      44
six      55
```
…and now the 2nd level of the 1st axis.
```python
In [84]: df.xs('six', level=1, axis=0)
Out[84]:
    MyData
AA      22
BB      55
```
Slicing a MultiIndex with xs, method #2
```python
In [85]: import itertools

In [86]: index = list(itertools.product(['Ada', 'Quinn', 'Violet'],
   ....:                                ['Comp', 'Math', 'Sci']))

In [87]: headr = list(itertools.product(['Exams', 'Labs'], ['I', 'II']))

In [88]: indx = pd.MultiIndex.from_tuples(index, names=['Student', 'Course'])

In [89]: cols = pd.MultiIndex.from_tuples(headr)  # Notice these are un-named

In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]

In [91]: df = pd.DataFrame(data, indx, cols)

In [92]: df
Out[92]:
                Exams     Labs
                    I  II    I  II
Student Course
Ada     Comp       70  71   72  73
        Math       71  73   75  74
        Sci        72  75   75  75
Quinn   Comp       73  74   75  76
        Math       74  76   78  77
        Sci        75  78   78  78
Violet  Comp       76  77   78  79
        Math       77  79   81  80
        Sci        78  81   81  81

In [93]: All = slice(None)

In [94]: df.loc['Violet']
Out[94]:
       Exams     Labs
           I  II    I  II
Course
Comp      76  77   78  79
Math      77  79   81  80
Sci       78  81   81  81

In [95]: df.loc[(All, 'Math'), All]
Out[95]:
                Exams     Labs
                    I  II    I  II
Student Course
Ada     Math       71  73   75  74
Quinn   Math       74  76   78  77
Violet  Math       77  79   81  80

In [96]: df.loc[(slice('Ada', 'Quinn'), 'Math'), All]
Out[96]:
                Exams     Labs
                    I  II    I  II
Student Course
Ada     Math       71  73   75  74
Quinn   Math       74  76   78  77

In [97]: df.loc[(All, 'Math'), ('Exams')]
Out[97]:
                 I  II
Student Course
Ada     Math    71  73
Quinn   Math    74  76
Violet  Math    77  79

In [98]: df.loc[(All, 'Math'), (All, 'II')]
Out[98]:
               Exams Labs
                  II   II
Student Course
Ada     Math      73   74
Quinn   Math      76   77
Violet  Math      79   80
```
Setting portions of a MultiIndex with xs
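Since setting values through xs is not generally possible, the usual route is .loc with pd.IndexSlice; a minimal sketch (the frame here is illustrative):

```python
import pandas as pd

idx = pd.MultiIndex.from_product([['AA', 'BB'], ['one', 'two']])
df = pd.DataFrame({'MyData': [11, 22, 33, 44]}, index=idx)

# Assign to every row whose second index level is 'two'
df.loc[pd.IndexSlice[:, 'two'], 'MyData'] = 0
```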
Sorting
Sort by specific column or an ordered list of columns, with a MultiIndex
```python
In [99]: df.sort_values(by=('Labs', 'II'), ascending=False)
Out[99]:
                Exams     Labs
                    I  II    I  II
Student Course
Violet  Sci        78  81   81  81
        Math       77  79   81  80
        Comp       76  77   78  79
Quinn   Sci        75  78   78  78
        Math       74  76   78  77
        Comp       73  74   75  76
Ada     Sci        72  75   75  75
        Math       71  73   75  74
        Comp       70  71   72  73
```
Partial selection, the need for sortedness.
Levels
Prepending a level to a MultiIndex
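One convenient way, shown here as a sketch (not necessarily the approach in the link), is pd.concat with keys, which pushes a new outermost level onto the index:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])

# keys= wraps the frame's index in a new outer level named 'outer'
df2 = pd.concat([df], keys=['level0'], names=['outer'])
```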
Missing data
The missing data docs.
Fill forward a reversed timeseries
```python
In [100]: df = pd.DataFrame(np.random.randn(6, 1),
   .....:                   index=pd.date_range('2013-08-01', periods=6, freq='B'),
   .....:                   columns=list('A'))

In [101]: df.loc[df.index[3], 'A'] = np.nan

In [102]: df
Out[102]:
                   A
2013-08-01  0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06       NaN
2013-08-07 -0.424972
2013-08-08  0.567020

In [103]: df.reindex(df.index[::-1]).ffill()
Out[103]:
                   A
2013-08-08  0.567020
2013-08-07 -0.424972
2013-08-06 -0.424972
2013-08-05 -1.039575
2013-08-02 -0.706771
2013-08-01  0.721555
```
Replace
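The classic recipe here is replace with regex backreferences; a small sketch (the series and pattern are illustrative):

```python
import pandas as pd

s = pd.Series(['foo 123', 'bar 456'])

# \1 refers to the digits captured by the pattern; regex=True enables this
s.replace(r'(\d+)', r'[\1]', regex=True)
```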
Grouping
The grouping docs.
Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns
```python
In [104]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
   .....:                    'size': list('SSMMMLL'),
   .....:                    'weight': [8, 10, 11, 1, 20, 12, 12],
   .....:                    'adult': [False] * 5 + [True] * 2})

In [105]: df
Out[105]:
  animal size  weight  adult
0    cat    S       8  False
1    dog    S      10  False
2    cat    M      11  False
3   fish    M       1  False
4    dog    M      20  False
5    cat    L      12   True
6    cat    L      12   True

# List the size of the animals with the highest weight.
In [106]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Out[106]:
animal
cat     L
dog     M
fish    M
dtype: object
```
Using get_group

```python
In [107]: gb = df.groupby(['animal'])

In [108]: gb.get_group('cat')
Out[108]:
  animal size  weight  adult
0    cat    S       8  False
2    cat    M      11  False
5    cat    L      12   True
6    cat    L      12   True
```
Apply to different items in a group
```python
In [109]: def GrowUp(x):
   .....:     avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
   .....:     avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
   .....:     avg_weight += sum(x[x['size'] == 'L'].weight)
   .....:     avg_weight /= len(x)
   .....:     return pd.Series(['L', avg_weight, True],
   .....:                      index=['size', 'weight', 'adult'])

In [110]: expected_df = gb.apply(GrowUp)

In [111]: expected_df
Out[111]:
       size   weight  adult
animal
cat       L  12.4375   True
dog       L  20.0000   True
fish      L   1.2500   True
```
Expanding apply

```python
In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])

In [113]: def cum_ret(x, y):
   .....:     return x * (1 + y)

In [114]: def red(x):
   .....:     return functools.reduce(cum_ret, x, 1.0)

In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0    1.010000
1    1.030200
2    1.061106
3    1.103550
4    1.158728
5    1.228251
6    1.314229
7    1.419367
8    1.547110
9    1.701821
dtype: float64
```
Replacing some values with mean of the rest of a group
```python
In [116]: df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, -1, 1, 2]})

In [117]: gb = df.groupby('A')

In [118]: def replace(g):
   .....:     mask = g < 0
   .....:     return g.where(mask, g[~mask].mean())

In [119]: gb.transform(replace)
Out[119]:
     B
0  1.0
1 -1.0
2  1.5
3  1.5
```
Sort groups by aggregated data
```python
In [120]: df = pd.DataFrame({'code': ['foo', 'bar', 'baz'] * 2,
   .....:                    'data': [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
   .....:                    'flag': [False, True] * 3})

In [121]: code_groups = df.groupby('code')

In [122]: agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')

In [123]: sorted_df = df.loc[agg_n_sort_order.index]

In [124]: sorted_df
Out[124]:
  code  data   flag
1  bar -0.21   True
4  bar -0.59  False
0  foo  0.16  False
3  foo  0.45   True
2  baz  0.33  False
5  baz  0.62   True
```
Create multiple aggregated columns
```python
In [125]: rng = pd.date_range(start="2014-10-07", periods=10, freq='2min')

In [126]: ts = pd.Series(data=list(range(10)), index=rng)

In [127]: def MyCust(x):
   .....:     if len(x) > 2:
   .....:         return x[1] * 1.234
   .....:     return pd.NaT

In [128]: mhc = {'Mean': np.mean, 'Max': np.max, 'Custom': MyCust}

In [129]: ts.resample("5min").apply(mhc)
Out[129]:
Mean    2014-10-07 00:00:00        1
        2014-10-07 00:05:00      3.5
        2014-10-07 00:10:00        6
        2014-10-07 00:15:00      8.5
Max     2014-10-07 00:00:00        2
        2014-10-07 00:05:00        4
        2014-10-07 00:10:00        7
        2014-10-07 00:15:00        9
Custom  2014-10-07 00:00:00    1.234
        2014-10-07 00:05:00      NaT
        2014-10-07 00:10:00    7.404
        2014-10-07 00:15:00      NaT
dtype: object

In [130]: ts
Out[130]:
2014-10-07 00:00:00    0
2014-10-07 00:02:00    1
2014-10-07 00:04:00    2
2014-10-07 00:06:00    3
2014-10-07 00:08:00    4
2014-10-07 00:10:00    5
2014-10-07 00:12:00    6
2014-10-07 00:14:00    7
2014-10-07 00:16:00    8
2014-10-07 00:18:00    9
Freq: 2T, dtype: int64
```
Create a value counts column and reassign back to the DataFrame
```python
In [131]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
   .....:                    'Value': [100, 150, 50, 50]})

In [132]: df
Out[132]:
  Color  Value
0   Red    100
1   Red    150
2   Red     50
3  Blue     50

In [133]: df['Counts'] = df.groupby(['Color']).transform(len)

In [134]: df
Out[134]:
  Color  Value  Counts
0   Red    100       3
1   Red    150       3
2   Red     50       3
3  Blue     50       1
```
Shift groups of the values in a column based on the index
```python
In [135]: df = pd.DataFrame({'line_race': [10, 10, 8, 10, 10, 8],
   .....:                    'beyer': [99, 102, 103, 103, 88, 100]},
   .....:                   index=['Last Gunfighter', 'Last Gunfighter',
   .....:                          'Last Gunfighter', 'Paynter', 'Paynter',
   .....:                          'Paynter'])

In [136]: df
Out[136]:
                 line_race  beyer
Last Gunfighter         10     99
Last Gunfighter         10    102
Last Gunfighter          8    103
Paynter                 10    103
Paynter                 10     88
Paynter                  8    100

In [137]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)

In [138]: df
Out[138]:
                 line_race  beyer  beyer_shifted
Last Gunfighter         10     99            NaN
Last Gunfighter         10    102           99.0
Last Gunfighter          8    103          102.0
Paynter                 10    103            NaN
Paynter                 10     88          103.0
Paynter                  8    100           88.0
```
Select row with maximum value from each group
```python
In [139]: df = pd.DataFrame({'host': ['other', 'other', 'that', 'this', 'this'],
   .....:                    'service': ['mail', 'web', 'mail', 'mail', 'web'],
   .....:                    'no': [1, 2, 1, 2, 1]}).set_index(['host', 'service'])

In [140]: mask = df.groupby(level=0).agg('idxmax')

In [141]: df_count = df.loc[mask['no']].reset_index()

In [142]: df_count
Out[142]:
    host service  no
0  other     web   2
1   that    mail   1
2   this    mail   2
```
Grouping like Python’s itertools.groupby
```python
In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])

In [144]: df.A.groupby((df.A != df.A.shift()).cumsum()).groups
Out[144]:
{1: Int64Index([0], dtype='int64'),
 2: Int64Index([1], dtype='int64'),
 3: Int64Index([2], dtype='int64'),
 4: Int64Index([3, 4, 5], dtype='int64'),
 5: Int64Index([6], dtype='int64'),
 6: Int64Index([7, 8], dtype='int64')}

In [145]: df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
Out[145]:
0    0
1    1
2    0
3    1
4    2
5    3
6    0
7    1
8    2
Name: A, dtype: int64
```
Expanding data
Rolling Computation window based on values instead of counts
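For a monotonic time index, pandas supports this directly with an offset-based window; a minimal sketch (the data is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5.0),
              index=pd.to_datetime(['2020-01-01', '2020-01-02', '2020-01-04',
                                    '2020-01-05', '2020-01-09']))

# The window spans '2D' of index values, not a fixed number of observations
s.rolling('2D').sum()
```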
Splitting
Create a list of dataframes, split using a delineation based on logic included in rows.
```python
In [146]: df = pd.DataFrame(data={'Case': ['A', 'A', 'A', 'B', 'A', 'A', 'B', 'A',
   .....:                                  'A'],
   .....:                         'Data': np.random.randn(9)})

In [147]: dfs = list(zip(*df.groupby((1 * (df['Case'] == 'B')).cumsum()
   .....:                            .rolling(window=3, min_periods=1).median())))[-1]

In [148]: dfs[0]
Out[148]:
  Case      Data
0    A  0.276232
1    A -1.087401
2    A -0.673690
3    B  0.113648

In [149]: dfs[1]
Out[149]:
  Case      Data
4    A -1.478427
5    A  0.524988
6    B  0.404705

In [150]: dfs[2]
Out[150]:
  Case      Data
7    A  0.577046
8    A -1.715002
```
Pivot
The Pivot docs.
Partial sums and subtotals

```python
In [151]: df = pd.DataFrame(data={'Province': ['ON', 'QC', 'BC', 'AL', 'AL', 'MN', 'ON'],
   .....:                         'City': ['Toronto', 'Montreal', 'Vancouver',
   .....:                                  'Calgary', 'Edmonton', 'Winnipeg',
   .....:                                  'Windsor'],
   .....:                         'Sales': [13, 6, 16, 8, 4, 3, 1]})

In [152]: table = pd.pivot_table(df, values=['Sales'], index=['Province'],
   .....:                        columns=['City'], aggfunc=np.sum, margins=True)

In [153]: table.stack('City')
Out[153]:
                    Sales
Province City
AL       All         12.0
         Calgary      8.0
         Edmonton     4.0
BC       All         16.0
         Vancouver   16.0
...                   ...
All      Montreal     6.0
         Toronto     13.0
         Vancouver   16.0
         Windsor      1.0
         Winnipeg     3.0

[20 rows x 1 columns]
```
Frequency table like plyr in R
```python
In [154]: grades = [48, 99, 75, 80, 42, 80, 72, 68, 36, 78]

In [155]: df = pd.DataFrame({'ID': ["x%d" % r for r in range(10)],
   .....:                    'Gender': ['F', 'M', 'F', 'M', 'F',
   .....:                               'M', 'F', 'M', 'M', 'M'],
   .....:                    'ExamYear': ['2007', '2007', '2007', '2008', '2008',
   .....:                                 '2008', '2008', '2009', '2009', '2009'],
   .....:                    'Class': ['algebra', 'stats', 'bio', 'algebra',
   .....:                              'algebra', 'stats', 'stats', 'algebra',
   .....:                              'bio', 'bio'],
   .....:                    'Participated': ['yes', 'yes', 'yes', 'yes', 'no',
   .....:                                     'yes', 'yes', 'yes', 'yes', 'yes'],
   .....:                    'Passed': ['yes' if x > 50 else 'no' for x in grades],
   .....:                    'Employed': [True, True, True, False,
   .....:                                 False, False, False, True, True, False],
   .....:                    'Grade': grades})

In [156]: df.groupby('ExamYear').agg({'Participated': lambda x: x.value_counts()['yes'],
   .....:                             'Passed': lambda x: sum(x == 'yes'),
   .....:                             'Employed': lambda x: sum(x),
   .....:                             'Grade': lambda x: sum(x) / len(x)})
Out[156]:
          Participated  Passed  Employed      Grade
ExamYear
2007                 3       2         3  74.000000
2008                 3       3         0  68.500000
2009                 3       2         2  60.666667
```
Plot pandas DataFrame with year over year data
To create year and month cross tabulation:
```python
In [157]: df = pd.DataFrame({'value': np.random.randn(36)},
   .....:                   index=pd.date_range('2011-01-01', freq='M', periods=36))

In [158]: pd.pivot_table(df, index=df.index.month, columns=df.index.year,
   .....:                values='value', aggfunc='sum')
Out[158]:
        2011      2012      2013
1  -1.039268 -0.968914  2.565646
2  -0.370647 -1.294524  1.431256
3  -1.157892  0.413738  1.340309
4  -1.344312  0.276662 -1.170299
5   0.844885 -0.472035 -0.226169
6   1.075770 -0.013960  0.410835
7  -0.109050 -0.362543  0.813850
8   1.643563 -0.006154  0.132003
9  -1.469388 -0.923061 -0.827317
10  0.357021  0.895717 -0.076467
11 -0.674600  0.805244 -1.187678
12 -1.776904 -1.206412  1.130127
```
Apply
Rolling apply to organize - Turning embedded lists into a MultiIndex frame
```python
In [159]: df = pd.DataFrame(data={'A': [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
   .....:                         'B': [['a', 'b', 'c'], ['jj', 'kk'], ['ccc']]},
   .....:                   index=['I', 'II', 'III'])

In [160]: def SeriesFromSubList(aList):
   .....:     return pd.Series(aList)

In [161]: df_orgz = pd.concat({ind: row.apply(SeriesFromSubList)
   .....:                      for ind, row in df.iterrows()})

In [162]: df_orgz
Out[162]:
         0    1    2     3
I   A    2    4    8  16.0
    B    a    b    c   NaN
II  A  100  200  NaN   NaN
    B   jj   kk  NaN   NaN
III A   10   20   30   NaN
    B  ccc  NaN  NaN   NaN
```
Rolling apply with a DataFrame returning a Series
Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned
```python
In [163]: df = pd.DataFrame(data=np.random.randn(2000, 2) / 10000,
   .....:                   index=pd.date_range('2001-01-01', periods=2000),
   .....:                   columns=['A', 'B'])

In [164]: df
Out[164]:
                   A         B
2001-01-01 -0.000144 -0.000141
2001-01-02  0.000161  0.000102
2001-01-03  0.000057  0.000088
2001-01-04 -0.000221  0.000097
2001-01-05 -0.000201 -0.000041
...              ...       ...
2006-06-19  0.000040 -0.000235
2006-06-20 -0.000123 -0.000021
2006-06-21 -0.000113  0.000114
2006-06-22  0.000136  0.000109
2006-06-23  0.000027  0.000030

[2000 rows x 2 columns]

In [165]: def gm(df, const):
   .....:     v = ((((df.A + df.B) + 1).cumprod()) - 1) * const
   .....:     return v.iloc[-1]

In [166]: s = pd.Series({df.index[i]: gm(df.iloc[i:min(i + 51, len(df) - 1)], 5)
   .....:                for i in range(len(df) - 50)})

In [167]: s
Out[167]:
2001-01-01    0.000930
2001-01-02    0.002615
2001-01-03    0.001281
2001-01-04    0.001117
2001-01-05    0.002772
                ...
2006-04-30    0.003296
2006-05-01    0.002629
2006-05-02    0.002081
2006-05-03    0.004247
2006-05-04    0.003928
Length: 1950, dtype: float64
```
Rolling apply with a DataFrame returning a Scalar
Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)
```python
In [168]: rng = pd.date_range(start='2014-01-01', periods=100)

In [169]: df = pd.DataFrame({'Open': np.random.randn(len(rng)),
   .....:                    'Close': np.random.randn(len(rng)),
   .....:                    'Volume': np.random.randint(100, 2000, len(rng))},
   .....:                   index=rng)

In [170]: df
Out[170]:
                Open     Close  Volume
2014-01-01 -1.611353 -0.492885    1219
2014-01-02 -3.000951  0.445794    1054
2014-01-03 -0.138359 -0.076081    1381
2014-01-04  0.301568  1.198259    1253
2014-01-05  0.276381 -0.669831    1728
...              ...       ...     ...
2014-04-06 -0.040338  0.937843    1188
2014-04-07  0.359661 -0.285908    1864
2014-04-08  0.060978  1.714814     941
2014-04-09  1.759055 -0.455942    1065
2014-04-10  0.138185 -1.147008    1453

[100 rows x 3 columns]

In [171]: def vwap(bars):
   .....:     return ((bars.Close * bars.Volume).sum() / bars.Volume.sum())

In [172]: window = 5

In [173]: s = pd.concat([(pd.Series(vwap(df.iloc[i:i + window]),
   .....:                           index=[df.index[i + window]]))
   .....:                for i in range(len(df) - window)])

In [174]: s.round(2)
Out[174]:
2014-01-06    0.02
2014-01-07    0.11
2014-01-08    0.10
2014-01-09    0.07
2014-01-10   -0.29
              ...
2014-04-06   -0.63
2014-04-07   -0.02
2014-04-08   -0.03
2014-04-09    0.34
2014-04-10    0.29
Length: 95, dtype: float64
```
Timeseries
Constructing a datetime range that excludes weekends and includes only certain times
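A sketch of one direct route, assuming "certain times" means business hours: the 'BH' frequency yields hourly stamps within 09:00-17:00 on weekdays only:

```python
import pandas as pd

# Hourly timestamps, weekends excluded, 09:00-17:00 (BusinessHour frequency)
rng = pd.date_range('2020-01-01', '2020-01-10', freq='BH')
```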
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a Python pandas DataFrame?
Dealing with duplicates when reindexing a timeseries to a specified frequency
Calculate the first day of the month for each entry in a DatetimeIndex
```python
In [175]: dates = pd.date_range('2000-01-01', periods=5)

In [176]: dates.to_period(freq='M').to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
               '2000-01-01'],
              dtype='datetime64[ns]', freq=None)
```
Resampling
The Resample docs.
Using Grouper instead of TimeGrouper for time grouping of values
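For example (a sketch assuming a 'date' column), pd.Grouper covers the old TimeGrouper use case:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'date': pd.date_range('2020-01-01', periods=6, freq='12H'),
                   'value': np.arange(6)})

# Group 'value' into daily buckets keyed on the 'date' column
df.groupby(pd.Grouper(key='date', freq='D'))['value'].sum()
```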
Time grouping with some missing values
Valid frequency arguments to Grouper
Using TimeGrouper and another grouping to create subgroups, then apply a custom function
Resampling with custom periods
Resample intraday frame without adding new days
Merge
The Concat docs. The Join docs.
Append two dataframes with overlapping index (emulate R rbind)
```python
In [177]: rng = pd.date_range('2000-01-01', periods=6)

In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])

In [179]: df2 = df1.copy()
```
Depending on df construction, ignore_index may be needed
```python
In [180]: df = df1.append(df2, ignore_index=True)

In [181]: df
Out[181]:
           A         B         C
0  -0.870117 -0.479265 -0.790855
1   0.144817  1.726395 -0.464535
2  -0.821906  1.597605  0.187307
3  -0.128342 -1.511638 -0.289858
4   0.399194 -1.430030 -0.639760
5   1.115116 -2.012600  1.810662
6  -0.870117 -0.479265 -0.790855
7   0.144817  1.726395 -0.464535
8  -0.821906  1.597605  0.187307
9  -0.128342 -1.511638 -0.289858
10  0.399194 -1.430030 -0.639760
11  1.115116 -2.012600  1.810662
```
Self Join of a DataFrame

```python
In [182]: df = pd.DataFrame(data={'Area': ['A'] * 5 + ['C'] * 2,
   .....:                         'Bins': [110] * 2 + [160] * 3 + [40] * 2,
   .....:                         'Test_0': [0, 1, 0, 1, 2, 0, 1],
   .....:                         'Data': np.random.randn(7)})

In [183]: df
Out[183]:
  Area  Bins  Test_0      Data
0    A   110       0 -0.433937
1    A   110       1 -0.160552
2    A   160       0  0.744434
3    A   160       1  1.754213
4    A   160       2  0.000850
5    C    40       0  0.342243
6    C    40       1  1.070599

In [184]: df['Test_1'] = df['Test_0'] - 1

In [185]: pd.merge(df, df, left_on=['Bins', 'Area', 'Test_0'],
   .....:          right_on=['Bins', 'Area', 'Test_1'],
   .....:          suffixes=('_L', '_R'))
Out[185]:
  Area  Bins  Test_0_L    Data_L  Test_1_L  Test_0_R    Data_R  Test_1_R
0    A   110         0 -0.433937        -1         1 -0.160552         0
1    A   160         0  0.744434        -1         1  1.754213         0
2    A   160         1  1.754213         0         2  0.000850         1
3    C    40         0  0.342243        -1         1  1.070599         0
```
Join with a criteria based on the values
Using searchsorted to merge based on values inside a range
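The core of that trick is np.searchsorted, which maps each value to its position among sorted bin edges; a minimal sketch (the edges and labels are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [3, 250, 95]})
edges = np.array([0, 10, 100, 1000])             # sorted range boundaries
labels = np.array(['small', 'medium', 'large'])  # one label per interval

# searchsorted returns each value's insertion index into the edges
df['size'] = labels[np.searchsorted(edges, df['value']) - 1]
```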
Plotting
The Plotting docs.
Setting x-axis major and minor labels
Plotting multiple charts in an ipython notebook
Annotate a time-series plot #2
Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
Boxplot for each quartile of a stratifying variable
```python
In [186]: df = pd.DataFrame(
   .....:     {'stratifying_var': np.random.uniform(0, 100, 20),
   .....:      'price': np.random.normal(100, 5, 20)})

In [187]: df['quartiles'] = pd.qcut(
   .....:     df['stratifying_var'],
   .....:     4,
   .....:     labels=['0-25%', '25-50%', '50-75%', '75-100%'])

In [188]: df.boxplot(column='price', by='quartiles')
Out[188]: <matplotlib.axes._subplots.AxesSubplot at 0x7f65f77e6470>
```

Data In/Out
Performance comparison of SQL vs HDF5
CSV
The CSV docs
Reading only certain rows of a csv chunk-by-chunk
Reading the first few lines of a frame
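Both of the recipes above come down to read_csv keyword arguments; a hedged sketch ('data.csv' and the filter are illustrative):

```python
import pandas as pd

# First few lines only
head = pd.read_csv('data.csv', nrows=5)

# Stream the file in chunks, keeping only the rows of interest from each chunk
pieces = [chunk[chunk['value'] > 0]
          for chunk in pd.read_csv('data.csv', chunksize=10000)]
filtered = pd.concat(pieces, ignore_index=True)
```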
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands).
This example shows a WinZipped file, but is a general application of opening the file within a context manager and
using that handle to read.
See here
Reading CSV with Unix timestamps and converting to local timezone
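A sketch of the idea (the column name and target zone are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'ts': [1577836800, 1577923200]})  # seconds since the epoch

# Parse as UTC, then convert to the desired local timezone
local = (pd.to_datetime(df['ts'], unit='s')
           .dt.tz_localize('UTC')
           .dt.tz_convert('US/Eastern'))
```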
Write a multi-row index CSV without writing duplicates
Reading multiple files to create a single DataFrame
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all
of the individual frames into a list, and then combine the frames in the list using pd.concat():
```python
In [189]: for i in range(3):
   .....:     data = pd.DataFrame(np.random.randn(10, 4))
   .....:     data.to_csv('file_{}.csv'.format(i))

In [190]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']

In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
```
You can use the same approach to read all files matching a pattern. Here is an example using glob:
```python
In [192]: import glob

In [193]: import os

In [194]: files = glob.glob('file_*.csv')

In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
```
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
Parsing date components in multi-columns
Parsing date components in multi-columns is faster with a format
```python
In [196]: i = pd.date_range('20000101', periods=10000)

In [197]: df = pd.DataFrame({'year': i.year, 'month': i.month, 'day': i.day})

In [198]: df.head()
Out[198]:
   year  month  day
0  2000      1    1
1  2000      1    2
2  2000      1    3
3  2000      1    4
4  2000      1    5

In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
   .....: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],
   .....:                                           x['month'], x['day']), axis=1)
   .....: ds.head()
   .....: %timeit pd.to_datetime(ds)
   .....:
9.36 ms +- 106 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.88 ms +- 34.5 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
```
Skip row between header and data
```python
In [200]: data = """;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: date;Param1;Param2;Param4;Param5
   .....: ;m²;°C;m²;m
   .....: ;;;;
   .....: 01.01.1990 00:00;1;1;2;3
   .....: 01.01.1990 01:00;5;3;4;5
   .....: 01.01.1990 02:00;9;5;6;7
   .....: 01.01.1990 03:00;13;7;8;9
   .....: 01.01.1990 04:00;17;9;10;11
   .....: 01.01.1990 05:00;21;11;12;13
   .....: """
```
Option 1: pass rows explicitly to skip rows
```python
In [201]: from io import StringIO

In [202]: pd.read_csv(StringIO(data), sep=';', skiprows=[11, 12],
   .....:             index_col=0, parse_dates=True, header=10)
Out[202]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13
```
Option 2: read column names and then data
```python
In [203]: pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
Out[203]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')

In [204]: columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns

In [205]: pd.read_csv(StringIO(data), sep=';', index_col=0,
   .....:             header=12, parse_dates=True, names=columns)
Out[205]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13
```
SQL
The SQL docs
Reading from databases with SQL
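A minimal sketch using the standard-library sqlite3 driver (the table and query are illustrative):

```python
import sqlite3

import pandas as pd

con = sqlite3.connect(':memory:')
pd.DataFrame({'a': [1, 2]}).to_sql('my_table', con, index=False)

# read_sql_query returns the result set as a DataFrame
df = pd.read_sql_query('SELECT * FROM my_table', con)
con.close()
```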
Excel
The Excel docs
Reading from a filelike handle
Modifying formatting in XlsxWriter output
HTML
Reading HTML tables from a server that cannot handle the default request header
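The usual workaround is to fetch the page yourself with a custom User-Agent and hand the text to read_html; a sketch (the URL is illustrative, and requests is a third-party dependency):

```python
import pandas as pd
import requests

url = 'https://example.com/tables.html'  # illustrative
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

# read_html parses every <table> in the document into a list of DataFrames
tables = pd.read_html(resp.text)
```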
HDFStore
The HDFStores docs
Simple queries with a Timestamp Index
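A sketch of the pattern (the store name and frame are illustrative; where= queries require the table format):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(8, 2),
                  index=pd.date_range('2013-01-01', periods=8),
                  columns=['A', 'B'])

with pd.HDFStore('query_test.h5') as store:
    store.append('df', df)  # append() writes the queryable 'table' format
    result = store.select('df', where="index >= '2013-01-05'")
```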
Managing heterogeneous data using a linked multiple table hierarchy
Merging on-disk tables with millions of rows
Avoiding inconsistencies when writing to a store from multiple processes/threads
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from csv file and creating a store by chunks, with date parsing as well. See here
Creating a store chunk-by-chunk from a csv file
Appending to a store, while creating a unique index
Reading in a sequence of files, then providing a global unique index to a store while appending
Groupby on a HDFStore with low group density
Groupby on a HDFStore with high group density
Hierarchical queries on a HDFStore
Troubleshoot HDFStore exceptions
Setting min_itemsize with strings
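min_itemsize pre-sizes string columns in a table store so that longer values appended later still fit; a minimal sketch (names and sizes are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'s': ['ab', 'cd']})

with pd.HDFStore('strings_test.h5') as store:
    # Reserve 30 characters for column 's', beyond the current longest value
    store.append('df', df, min_itemsize={'s': 30})
    store.append('df', pd.DataFrame({'s': ['a-much-longer-string']}))
```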
Using ptrepack to create a completely-sorted-index on a store
Storing Attributes to a group node
```python
In [206]: df = pd.DataFrame(np.random.randn(8, 3))

In [207]: store = pd.HDFStore('test.h5')

In [208]: store.put('df', df)

# you can store an arbitrary Python object via pickle
In [209]: store.get_storer('df').attrs.my_attribute = {'A': 10}

In [210]: store.get_storer('df').attrs.my_attribute
Out[210]: {'A': 10}
```
Binary files
pandas readily accepts NumPy record arrays, if you need to read in a binary
file consisting of an array of C structs. For example, given this C program
in a file called main.c compiled with gcc main.c -std=gnu99 on a
64-bit machine,
```c
#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");
    fwrite(&d, sizeof(Data), n, file);
    fclose(file);

    return 0;
}
```
the following Python code will read the binary file 'binary.dat' into a
pandas DataFrame, where each element of the struct corresponds to a column
in the frame:
```python
names = 'count', 'avg', 'scale'

# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
              align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
```
::: tip Note
The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.
:::
Computation
Numerical integration (sample-based) of a time series
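A sketch of one approach using the trapezoidal rule, with the elapsed seconds of the DatetimeIndex as the x-axis (the series is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, 1.0, 2.0, 1.0],
              index=pd.date_range('2020-01-01', periods=4, freq='10s'))

# x-values: seconds elapsed since the first sample
x = (s.index - s.index[0]).total_seconds()
area = np.trapz(s.values, x)  # trapezoidal rule over the sampled points
```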
Correlation
Often it’s useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:
```python
In [211]: df = pd.DataFrame(np.random.random(size=(100, 5)))

In [212]: corr_mat = df.corr()

In [213]: mask = np.tril(np.ones_like(corr_mat, dtype=bool), k=-1)

In [214]: corr_mat.where(mask)
Out[214]:
          0         1         2         3   4
0       NaN       NaN       NaN       NaN NaN
1 -0.018923       NaN       NaN       NaN NaN
2 -0.076296 -0.012464       NaN       NaN NaN
3 -0.169941 -0.289416  0.076462       NaN NaN
4  0.064326  0.018759 -0.084140 -0.079859 NaN
```
The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.
```python
In [215]: def distcorr(x, y):
   .....:     n = len(x)
   .....:     a = np.zeros(shape=(n, n))
   .....:     b = np.zeros(shape=(n, n))
   .....:     for i in range(n):
   .....:         for j in range(i + 1, n):
   .....:             a[i, j] = abs(x[i] - x[j])
   .....:             b[i, j] = abs(y[i] - y[j])
   .....:     a += a.T
   .....:     b += b.T
   .....:     a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
   .....:     b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
   .....:     A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
   .....:     B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
   .....:     cov_ab = np.sqrt(np.nansum(A * B)) / n
   .....:     std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
   .....:     std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
   .....:     return cov_ab / std_a / std_b

In [216]: df = pd.DataFrame(np.random.normal(size=(100, 3)))

In [217]: df.corr(method=distcorr)
Out[217]:
          0         1         2
0  1.000000  0.199653  0.214871
1  0.199653  1.000000  0.195116
2  0.214871  0.195116  1.000000
```
Timedeltas
The Timedeltas docs.
```python
In [218]: import datetime

In [219]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))

In [220]: s - s.max()
Out[220]:
0   -2 days
1   -1 days
2    0 days
dtype: timedelta64[ns]

In [221]: s.max() - s
Out[221]:
0   2 days
1   1 days
2   0 days
dtype: timedelta64[ns]

In [222]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[222]:
0   364 days 20:55:00
1   365 days 20:55:00
2   366 days 20:55:00
dtype: timedelta64[ns]

In [223]: s + datetime.timedelta(minutes=5)
Out[223]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

In [224]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[224]:
0   -365 days +03:05:00
1   -366 days +03:05:00
2   -367 days +03:05:00
dtype: timedelta64[ns]

In [225]: datetime.timedelta(minutes=5) + s
Out[225]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]
```
Adding and subtracting deltas and dates
```python
In [226]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])

In [227]: df = pd.DataFrame({'A': s, 'B': deltas})

In [228]: df
Out[228]:
           A      B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days

In [229]: df['New Dates'] = df['A'] + df['B']

In [230]: df['Delta'] = df['A'] - df['New Dates']

In [231]: df
Out[231]:
           A      B  New Dates   Delta
0 2012-01-01 0 days 2012-01-01  0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days

In [232]: df.dtypes
Out[232]:
A             datetime64[ns]
B            timedelta64[ns]
New Dates     datetime64[ns]
Delta        timedelta64[ns]
dtype: object
```
Values can be set to NaT using np.nan, similar to datetime
```python
In [233]: y = s - s.shift()

In [234]: y
Out[234]:
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns]

In [235]: y[1] = np.nan

In [236]: y
Out[236]:
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns]
```
Aliasing axis names
To globally provide aliases for axis names, one can define these 2 functions:
```python
In [237]: def set_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES[alias] = axis
```
```python
In [238]: def clear_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES.pop(alias, None)
```
```python
In [239]: set_axis_alias(pd.DataFrame, 'columns', 'myaxis2')

In [240]: df2 = pd.DataFrame(np.random.randn(3, 2), columns=['c1', 'c2'],
   .....:                    index=['i1', 'i2', 'i3'])

In [241]: df2.sum(axis='myaxis2')
Out[241]:
i1   -0.461013
i2    2.040016
i3    0.904681
dtype: float64

In [242]: clear_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
```
Creating example data
To create a dataframe from every combination of some given values, like R’s expand.grid()
function, we can create a dict where the keys are column names and the values are lists
of the data values:
```python
In [243]: def expand_grid(data_dict):
   .....:     rows = itertools.product(*data_dict.values())
   .....:     return pd.DataFrame.from_records(rows, columns=data_dict.keys())

In [244]: df = expand_grid({'height': [60, 70],
   .....:                   'weight': [100, 140, 180],
   .....:                   'sex': ['Male', 'Female']})

In [245]: df
Out[245]:
    height  weight     sex
0       60     100    Male
1       60     100  Female
2       60     140    Male
3       60     140  Female
4       60     180    Male
5       60     180  Female
6       70     100    Male
7       70     100  Female
8       70     140    Male
9       70     140  Female
10      70     180    Male
11      70     180  Female
```
