Cookbook

This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.

Adding interesting links and/or inline examples to this section is a great First Pull Request.

Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the Stack Overflow and GitHub links. Many of the links contain expanded information beyond what the in-line examples offer.

pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users.

These examples are written for Python 3. Minor tweaks might be necessary for earlier Python versions.

Idioms

These are some neat pandas idioms

if-then/if-then-else on one column, and assignment to another one or more columns:

```python
In [1]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ...:                    'BBB': [10, 20, 30, 40],
   ...:                    'CCC': [100, 50, -30, -50]})
   ...:

In [2]: df
Out[2]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50
```

if-then…

An if-then on one column

```python
In [3]: df.loc[df.AAA >= 5, 'BBB'] = -1

In [4]: df
Out[4]:
   AAA  BBB  CCC
0    4   10  100
1    5   -1   50
2    6   -1  -30
3    7   -1  -50
```

An if-then with assignment to 2 columns:

```python
In [5]: df.loc[df.AAA >= 5, ['BBB', 'CCC']] = 555

In [6]: df
Out[6]:
   AAA  BBB  CCC
0    4   10  100
1    5  555  555
2    6  555  555
3    7  555  555
```

Add another line with different logic, to do the -else

```python
In [7]: df.loc[df.AAA < 5, ['BBB', 'CCC']] = 2000

In [8]: df
Out[8]:
   AAA   BBB   CCC
0    4  2000  2000
1    5   555   555
2    6   555   555
3    7   555   555
```

Or use pandas where after you’ve set up a mask

```python
In [9]: df_mask = pd.DataFrame({'AAA': [True] * 4,
   ...:                         'BBB': [False] * 4,
   ...:                         'CCC': [True, False] * 2})
   ...:

In [10]: df.where(df_mask, -1000)
Out[10]:
   AAA   BBB   CCC
0    4 -1000  2000
1    5 -1000 -1000
2    6 -1000   555
3    7 -1000 -1000
```

if-then-else using numpy’s where()

```python
In [11]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [12]: df
Out[12]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [13]: df['logic'] = np.where(df['AAA'] > 5, 'high', 'low')

In [14]: df
Out[14]:
   AAA  BBB  CCC logic
0    4   10  100   low
1    5   20   50   low
2    6   30  -30  high
3    7   40  -50  high
```

Splitting

Split a frame with a boolean criterion

```python
In [15]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [16]: df
Out[16]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [17]: df[df.AAA <= 5]
Out[17]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50

In [18]: df[df.AAA > 5]
Out[18]:
   AAA  BBB  CCC
2    6   30  -30
3    7   40  -50
```

Building criteria

Select with multi-column criteria

```python
In [19]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [20]: df
Out[20]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50
```

…and (without assignment returns a Series)

```python
In [21]: df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']
Out[21]:
0    4
1    5
Name: AAA, dtype: int64
```

…or (without assignment returns a Series)

```python
In [22]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']
Out[22]:
0    4
1    5
2    6
3    7
Name: AAA, dtype: int64
```

…or (with assignment modifies the DataFrame.)

```python
In [23]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1

In [24]: df
Out[24]:
   AAA  BBB  CCC
0  0.1   10  100
1  5.0   20   50
2  0.1   30  -30
3  0.1   40  -50
```

Select rows with data closest to certain value using argsort

```python
In [25]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [26]: df
Out[26]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [27]: aValue = 43.0

In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
   AAA  BBB  CCC
1    5   20   50
0    4   10  100
2    6   30  -30
3    7   40  -50
```

Dynamically reduce a list of criteria using binary operators

```python
In [29]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [30]: df
Out[30]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [31]: Crit1 = df.AAA <= 5.5

In [32]: Crit2 = df.BBB == 10.0

In [33]: Crit3 = df.CCC > -40.0
```

One could hard code:

```python
In [34]: AllCrit = Crit1 & Crit2 & Crit3
```

…Or it can be done with a list of dynamically built criteria

```python
In [35]: import functools

In [36]: CritList = [Crit1, Crit2, Crit3]

In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)

In [38]: df[AllCrit]
Out[38]:
   AAA  BBB  CCC
0    4   10  100
```

Selection

DataFrames

The indexing docs.

Using both row labels and value conditionals

```python
In [39]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [40]: df
Out[40]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
   AAA  BBB  CCC
0    4   10  100
2    6   30  -30
```

Use loc for label-oriented slicing and iloc positional slicing

```python
In [42]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]},
   ....:                   index=['foo', 'bar', 'boo', 'kar'])
   ....:
```

There are 2 explicit slicing methods, with a third general case

  1. Positional-oriented (Python slicing style : exclusive of end)
  2. Label-oriented (Non-Python slicing style : inclusive of end)
  3. General (Either slicing style : depends on if the slice contains labels or positions)
```python
In [43]: df.loc['bar':'kar']  # Label
Out[43]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

# Generic
In [44]: df.iloc[0:3]
Out[44]:
     AAA  BBB  CCC
foo    4   10  100
bar    5   20   50
boo    6   30  -30

In [45]: df.loc['bar':'kar']
Out[45]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50
```

Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.

```python
In [46]: data = {'AAA': [4, 5, 6, 7],
   ....:         'BBB': [10, 20, 30, 40],
   ....:         'CCC': [100, 50, -30, -50]}
   ....:

In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4])  # Note index starts at 1.

In [48]: df2.iloc[1:3]  # Position-oriented
Out[48]:
   AAA  BBB  CCC
2    5   20   50
3    6   30  -30

In [49]: df2.loc[1:3]  # Label-oriented
Out[49]:
   AAA  BBB  CCC
1    4   10  100
2    5   20   50
3    6   30  -30
```

Using inverse operator (~) to take the complement of a mask

```python
In [50]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [51]: df
Out[51]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
   AAA  BBB  CCC
1    5   20   50
3    7   40  -50
```

New columns

Efficiently and dynamically creating new columns using applymap

```python
In [53]: df = pd.DataFrame({'AAA': [1, 2, 1, 3],
   ....:                    'BBB': [1, 1, 2, 2],
   ....:                    'CCC': [2, 1, 3, 1]})
   ....:

In [54]: df
Out[54]:
   AAA  BBB  CCC
0    1    1    2
1    2    1    1
2    1    2    3
3    3    2    1

In [55]: source_cols = df.columns  # Or some subset would work too

In [56]: new_cols = [str(x) + "_cat" for x in source_cols]

In [57]: categories = {1: 'Alpha', 2: 'Beta', 3: 'Charlie'}

In [58]: df[new_cols] = df[source_cols].applymap(categories.get)

In [59]: df
Out[59]:
   AAA  BBB  CCC  AAA_cat BBB_cat  CCC_cat
0    1    1    2    Alpha   Alpha     Beta
1    2    1    1     Beta   Alpha    Alpha
2    1    2    3    Alpha    Beta  Charlie
3    3    2    1  Charlie    Beta    Alpha
```

Keep other columns when using min() with groupby

```python
In [60]: df = pd.DataFrame({'AAA': [1, 1, 1, 2, 2, 2, 3, 3],
   ....:                    'BBB': [2, 1, 3, 4, 5, 1, 2, 3]})
   ....:

In [61]: df
Out[61]:
   AAA  BBB
0    1    2
1    1    1
2    1    3
3    2    4
4    2    5
5    2    1
6    3    2
7    3    3
```

Method 1 : idxmin() to get the index of the minimums

```python
In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
   AAA  BBB
1    1    1
5    2    1
6    3    2
```

Method 2 : sort then take first of each

```python
In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
   AAA  BBB
0    1    1
1    2    1
2    3    2
```

Notice the same results, with the exception of the index.

MultiIndexing

The multiindexing docs.

Creating a MultiIndex from a labeled frame

```python
In [64]: df = pd.DataFrame({'row': [0, 1, 2],
   ....:                    'One_X': [1.1, 1.1, 1.1],
   ....:                    'One_Y': [1.2, 1.2, 1.2],
   ....:                    'Two_X': [1.11, 1.11, 1.11],
   ....:                    'Two_Y': [1.22, 1.22, 1.22]})
   ....:

In [65]: df
Out[65]:
   row  One_X  One_Y  Two_X  Two_Y
0    0    1.1    1.2   1.11   1.22
1    1    1.1    1.2   1.11   1.22
2    2    1.1    1.2   1.11   1.22

# As Labelled Index
In [66]: df = df.set_index('row')

In [67]: df
Out[67]:
     One_X  One_Y  Two_X  Two_Y
row
0      1.1    1.2   1.11   1.22
1      1.1    1.2   1.11   1.22
2      1.1    1.2   1.11   1.22

# With Hierarchical Columns
In [68]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_'))
   ....:                                         for c in df.columns])
   ....:

In [69]: df
Out[69]:
     One        Two
       X    Y     X     Y
row
0    1.1  1.2  1.11  1.22
1    1.1  1.2  1.11  1.22
2    1.1  1.2  1.11  1.22

# Now stack & Reset
In [70]: df = df.stack(0).reset_index(1)

In [71]: df
Out[71]:
    level_1     X     Y
row
0       One  1.10  1.20
0       Two  1.11  1.22
1       One  1.10  1.20
1       Two  1.11  1.22
2       One  1.10  1.20
2       Two  1.11  1.22

# And fix the labels (Notice the label 'level_1' got added automatically)
In [72]: df.columns = ['Sample', 'All_X', 'All_Y']

In [73]: df
Out[73]:
    Sample  All_X  All_Y
row
0      One   1.10   1.20
0      Two   1.11   1.22
1      One   1.10   1.20
1      Two   1.11   1.22
2      One   1.10   1.20
2      Two   1.11   1.22
```

Arithmetic

Performing arithmetic with a MultiIndex that needs broadcasting

```python
In [74]: cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A', 'B', 'C']
   ....:                                   for y in ['O', 'I']])
   ....:

In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=['n', 'm'], columns=cols)

In [76]: df
Out[76]:
          A                   B                   C
          O         I         O         I         O         I
n  0.469112 -0.282863 -1.509059 -1.135632  1.212112 -0.173215
m  0.119209 -1.044236 -0.861849 -2.104569 -0.494929  1.071804

In [77]: df = df.div(df['C'], level=1)

In [78]: df
Out[78]:
          A                   B              C
          O         I         O         I    O    I
n  0.387021  1.633022 -1.244983  6.556214  1.0  1.0
m -0.240860 -0.974279  1.741358 -1.963577  1.0  1.0
```

Slicing

Slicing a MultiIndex with xs

```python
In [79]: coords = [('AA', 'one'), ('AA', 'six'), ('BB', 'one'), ('BB', 'two'),
   ....:           ('BB', 'six')]
   ....:

In [80]: index = pd.MultiIndex.from_tuples(coords)

In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ['MyData'])

In [82]: df
Out[82]:
        MyData
AA one      11
   six      22
BB one      33
   two      44
   six      55
```

To take the cross section of the 1st level and 1st axis of the index:

```python
# Note : level and axis are optional, and default to zero
In [83]: df.xs('BB', level=0, axis=0)
Out[83]:
     MyData
one      33
two      44
six      55
```

…and now the 2nd level of the 1st axis.

```python
In [84]: df.xs('six', level=1, axis=0)
Out[84]:
    MyData
AA      22
BB      55
```

Slicing a MultiIndex with xs, method #2

```python
In [85]: import itertools

In [86]: index = list(itertools.product(['Ada', 'Quinn', 'Violet'],
   ....:                                ['Comp', 'Math', 'Sci']))
   ....:

In [87]: headr = list(itertools.product(['Exams', 'Labs'], ['I', 'II']))

In [88]: indx = pd.MultiIndex.from_tuples(index, names=['Student', 'Course'])

In [89]: cols = pd.MultiIndex.from_tuples(headr)  # Notice these are un-named

In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]

In [91]: df = pd.DataFrame(data, indx, cols)

In [92]: df
Out[92]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Comp      70  71   72  73
        Math      71  73   75  74
        Sci       72  75   75  75
Quinn   Comp      73  74   75  76
        Math      74  76   78  77
        Sci       75  78   78  78
Violet  Comp      76  77   78  79
        Math      77  79   81  80
        Sci       78  81   81  81

In [93]: All = slice(None)

In [94]: df.loc['Violet']
Out[94]:
       Exams     Labs
           I  II    I  II
Course
Comp      76  77   78  79
Math      77  79   81  80
Sci       78  81   81  81

In [95]: df.loc[(All, 'Math'), All]
Out[95]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Math      71  73   75  74
Quinn   Math      74  76   78  77
Violet  Math      77  79   81  80

In [96]: df.loc[(slice('Ada', 'Quinn'), 'Math'), All]
Out[96]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Math      71  73   75  74
Quinn   Math      74  76   78  77

In [97]: df.loc[(All, 'Math'), ('Exams')]
Out[97]:
                I  II
Student Course
Ada     Math   71  73
Quinn   Math   74  76
Violet  Math   77  79

In [98]: df.loc[(All, 'Math'), (All, 'II')]
Out[98]:
               Exams Labs
                  II   II
Student Course
Ada     Math      73   74
Quinn   Math      76   77
Violet  Math      79   80
```

Setting portions of a MultiIndex with xs
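Depending on the pandas version, xs may return a copy, so assignment through it can be lost. A minimal sketch (an assumption, not the linked code) that writes into the original frame with loc and pd.IndexSlice instead:

```python
import pandas as pd

# Hypothetical two-level frame, for illustration only
idx = pd.MultiIndex.from_product([['AA', 'BB'], ['one', 'two']])
df = pd.DataFrame({'MyData': [11, 22, 33, 44]}, index=idx)

# loc with an IndexSlice selects the same cross section xs would,
# but writes in place rather than into a possible copy
df.loc[pd.IndexSlice[:, 'one'], 'MyData'] = 0
```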

Sorting

Sort by specific column or an ordered list of columns, with a MultiIndex

```python
In [99]: df.sort_values(by=('Labs', 'II'), ascending=False)
Out[99]:
               Exams     Labs
                   I  II    I  II
Student Course
Violet  Sci       78  81   81  81
        Math      77  79   81  80
        Comp      76  77   78  79
Quinn   Sci       75  78   78  78
        Math      74  76   78  77
        Comp      73  74   75  76
Ada     Sci       72  75   75  75
        Math      71  73   75  74
        Comp      70  71   72  73
```

Partial selection, the need for sortedness;

Levels

Prepending a level to a multiindex
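A minimal sketch of one way to do this (assumed here, not taken from the link): pd.concat with keys prepends a new outer level.

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])

# keys=... adds an outer index level, turning 'x' into ('run1', 'x'), etc.
pd.concat([df], keys=['run1'])

# The same trick on axis=1 prepends a level to the columns instead
pd.concat([df], keys=['run1'], axis=1)
```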

Flatten Hierarchical columns
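One common approach (a sketch, assuming string labels in every level) is to join each column tuple into a single flat name:

```python
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([['One', 'Two'], ['X', 'Y']])
df = pd.DataFrame(np.random.randn(2, 4), columns=cols)

# Each column label is a tuple like ('One', 'X'); join them into flat strings
df.columns = ['_'.join(col) for col in df.columns]  # One_X, One_Y, Two_X, Two_Y
```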

Missing data

The missing data docs.

Fill forward a reversed timeseries

```python
In [100]: df = pd.DataFrame(np.random.randn(6, 1),
   .....:                   index=pd.date_range('2013-08-01', periods=6, freq='B'),
   .....:                   columns=list('A'))
   .....:

In [101]: df.loc[df.index[3], 'A'] = np.nan

In [102]: df
Out[102]:
                   A
2013-08-01  0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06       NaN
2013-08-07 -0.424972
2013-08-08  0.567020

In [103]: df.reindex(df.index[::-1]).ffill()
Out[103]:
                   A
2013-08-08  0.567020
2013-08-07 -0.424972
2013-08-06 -0.424972
2013-08-05 -1.039575
2013-08-02 -0.706771
2013-08-01  0.721555
```

cumsum reset at NaN values
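A sketch of the idea behind the linked recipe: start a new group at each NaN, then take the cumulative sum within each group.

```python
import numpy as np
import pandas as pd

v = pd.Series([1.0, 2.0, np.nan, 3.0, 4.0])

# Each NaN increments the group id, so the cumsum restarts after every NaN
groups = v.isna().cumsum()
v.groupby(groups).cumsum()   # 1.0, 3.0, NaN, 3.0, 7.0
```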

Replace

Using replace with backrefs
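For example (a minimal sketch of the technique, not the linked code), replace with regex=True accepts backreferences in the replacement string:

```python
import pandas as pd

s = pd.Series(['A1', 'B2', 'C3'])

# \1 and \2 refer back to the captured letter and digit
s.replace(r'([A-Z])(\d)', r'\2\1', regex=True)   # 1A, 2B, 3C
```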

Grouping

The grouping docs.

Basic grouping with apply

Unlike agg, apply’s callable is passed a sub-DataFrame which gives you access to all the columns

```python
In [104]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
   .....:                    'size': list('SSMMMLL'),
   .....:                    'weight': [8, 10, 11, 1, 20, 12, 12],
   .....:                    'adult': [False] * 5 + [True] * 2})
   .....:

In [105]: df
Out[105]:
  animal size  weight  adult
0    cat    S       8  False
1    dog    S      10  False
2    cat    M      11  False
3   fish    M       1  False
4    dog    M      20  False
5    cat    L      12   True
6    cat    L      12   True

# List the size of the animals with the highest weight.
In [106]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Out[106]:
animal
cat     L
dog     M
fish    M
dtype: object
```

Using get_group

```python
In [107]: gb = df.groupby(['animal'])

In [108]: gb.get_group('cat')
Out[108]:
  animal size  weight  adult
0    cat    S       8  False
2    cat    M      11  False
5    cat    L      12   True
6    cat    L      12   True
```

Apply to different items in a group

```python
In [109]: def GrowUp(x):
   .....:     avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
   .....:     avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
   .....:     avg_weight += sum(x[x['size'] == 'L'].weight)
   .....:     avg_weight /= len(x)
   .....:     return pd.Series(['L', avg_weight, True],
   .....:                      index=['size', 'weight', 'adult'])
   .....:

In [110]: expected_df = gb.apply(GrowUp)

In [111]: expected_df
Out[111]:
        size   weight  adult
animal
cat        L  12.4375   True
dog        L  20.0000   True
fish       L   1.2500   True
```

Expanding apply

```python
In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])

In [113]: def cum_ret(x, y):
   .....:     return x * (1 + y)
   .....:

In [114]: def red(x):
   .....:     return functools.reduce(cum_ret, x, 1.0)
   .....:

In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0    1.010000
1    1.030200
2    1.061106
3    1.103550
4    1.158728
5    1.228251
6    1.314229
7    1.419367
8    1.547110
9    1.701821
dtype: float64
```

Replacing some values with mean of the rest of a group

```python
In [116]: df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, -1, 1, 2]})

In [117]: gb = df.groupby('A')

In [118]: def replace(g):
   .....:     mask = g < 0
   .....:     return g.where(mask, g[~mask].mean())
   .....:

In [119]: gb.transform(replace)
Out[119]:
     B
0  1.0
1 -1.0
2  1.5
3  1.5
```

Sort groups by aggregated data

```python
In [120]: df = pd.DataFrame({'code': ['foo', 'bar', 'baz'] * 2,
   .....:                    'data': [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
   .....:                    'flag': [False, True] * 3})
   .....:

In [121]: code_groups = df.groupby('code')

In [122]: agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')

In [123]: sorted_df = df.loc[agg_n_sort_order.index]

In [124]: sorted_df
Out[124]:
  code  data   flag
1  bar -0.21   True
4  bar -0.59  False
0  foo  0.16  False
3  foo  0.45   True
2  baz  0.33  False
5  baz  0.62   True
```

Create multiple aggregated columns

```python
In [125]: rng = pd.date_range(start="2014-10-07", periods=10, freq='2min')

In [126]: ts = pd.Series(data=list(range(10)), index=rng)

In [127]: def MyCust(x):
   .....:     if len(x) > 2:
   .....:         return x[1] * 1.234
   .....:     return pd.NaT
   .....:

In [128]: mhc = {'Mean': np.mean, 'Max': np.max, 'Custom': MyCust}

In [129]: ts.resample("5min").apply(mhc)
Out[129]:
Mean    2014-10-07 00:00:00        1
        2014-10-07 00:05:00      3.5
        2014-10-07 00:10:00        6
        2014-10-07 00:15:00      8.5
Max     2014-10-07 00:00:00        2
        2014-10-07 00:05:00        4
        2014-10-07 00:10:00        7
        2014-10-07 00:15:00        9
Custom  2014-10-07 00:00:00    1.234
        2014-10-07 00:05:00      NaT
        2014-10-07 00:10:00    7.404
        2014-10-07 00:15:00      NaT
dtype: object

In [130]: ts
Out[130]:
2014-10-07 00:00:00    0
2014-10-07 00:02:00    1
2014-10-07 00:04:00    2
2014-10-07 00:06:00    3
2014-10-07 00:08:00    4
2014-10-07 00:10:00    5
2014-10-07 00:12:00    6
2014-10-07 00:14:00    7
2014-10-07 00:16:00    8
2014-10-07 00:18:00    9
Freq: 2T, dtype: int64
```

Create a value counts column and reassign back to the DataFrame

```python
In [131]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
   .....:                    'Value': [100, 150, 50, 50]})
   .....:

In [132]: df
Out[132]:
  Color  Value
0   Red    100
1   Red    150
2   Red     50
3  Blue     50

In [133]: df['Counts'] = df.groupby(['Color']).transform(len)

In [134]: df
Out[134]:
  Color  Value  Counts
0   Red    100       3
1   Red    150       3
2   Red     50       3
3  Blue     50       1
```

Shift groups of the values in a column based on the index

```python
In [135]: df = pd.DataFrame({'line_race': [10, 10, 8, 10, 10, 8],
   .....:                    'beyer': [99, 102, 103, 103, 88, 100]},
   .....:                   index=['Last Gunfighter', 'Last Gunfighter',
   .....:                          'Last Gunfighter', 'Paynter', 'Paynter',
   .....:                          'Paynter'])
   .....:

In [136]: df
Out[136]:
                 line_race  beyer
Last Gunfighter         10     99
Last Gunfighter         10    102
Last Gunfighter          8    103
Paynter                 10    103
Paynter                 10     88
Paynter                  8    100

In [137]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)

In [138]: df
Out[138]:
                 line_race  beyer  beyer_shifted
Last Gunfighter         10     99            NaN
Last Gunfighter         10    102           99.0
Last Gunfighter          8    103          102.0
Paynter                 10    103            NaN
Paynter                 10     88          103.0
Paynter                  8    100           88.0
```

Select row with maximum value from each group

```python
In [139]: df = pd.DataFrame({'host': ['other', 'other', 'that', 'this', 'this'],
   .....:                    'service': ['mail', 'web', 'mail', 'mail', 'web'],
   .....:                    'no': [1, 2, 1, 2, 1]}).set_index(['host', 'service'])
   .....:

In [140]: mask = df.groupby(level=0).agg('idxmax')

In [141]: df_count = df.loc[mask['no']].reset_index()

In [142]: df_count
Out[142]:
    host service  no
0  other     web   2
1   that    mail   1
2   this    mail   2
```

Grouping like Python’s itertools.groupby

```python
In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])

In [144]: df.A.groupby((df.A != df.A.shift()).cumsum()).groups
Out[144]:
{1: Int64Index([0], dtype='int64'),
 2: Int64Index([1], dtype='int64'),
 3: Int64Index([2], dtype='int64'),
 4: Int64Index([3, 4, 5], dtype='int64'),
 5: Int64Index([6], dtype='int64'),
 6: Int64Index([7, 8], dtype='int64')}

In [145]: df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
Out[145]:
0    0
1    1
2    0
3    1
4    2
5    3
6    0
7    1
8    2
Name: A, dtype: int64
```

Expanding data

Alignment and to-date

Rolling Computation window based on values instead of counts

Rolling Mean by Time Interval
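The essence of the linked recipe, as a sketch: with a DatetimeIndex you can pass a time offset as the rolling window, so each window covers an interval of time rather than a fixed row count.

```python
import pandas as pd

rng = pd.date_range('2014-01-01', periods=10, freq='12H')
ts = pd.Series(range(10), index=rng)

# '2D' means each window spans two days of data, however many rows that is
ts.rolling('2D').mean()
```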

Splitting

Splitting a frame

Create a list of dataframes, split using a delineation based on logic included in rows.

```python
In [146]: df = pd.DataFrame(data={'Case': ['A', 'A', 'A', 'B', 'A', 'A', 'B', 'A',
   .....:                                  'A'],
   .....:                         'Data': np.random.randn(9)})
   .....:

In [147]: dfs = list(zip(*df.groupby((1 * (df['Case'] == 'B')).cumsum()
   .....:                            .rolling(window=3, min_periods=1).median())))[-1]
   .....:

In [148]: dfs[0]
Out[148]:
  Case      Data
0    A  0.276232
1    A -1.087401
2    A -0.673690
3    B  0.113648

In [149]: dfs[1]
Out[149]:
  Case      Data
4    A -1.478427
5    A  0.524988
6    B  0.404705

In [150]: dfs[2]
Out[150]:
  Case      Data
7    A  0.577046
8    A -1.715002
```

Pivot

The Pivot docs.

Partial sums and subtotals

```python
In [151]: df = pd.DataFrame(data={'Province': ['ON', 'QC', 'BC', 'AL', 'AL', 'MN', 'ON'],
   .....:                         'City': ['Toronto', 'Montreal', 'Vancouver',
   .....:                                  'Calgary', 'Edmonton', 'Winnipeg',
   .....:                                  'Windsor'],
   .....:                         'Sales': [13, 6, 16, 8, 4, 3, 1]})
   .....:

In [152]: table = pd.pivot_table(df, values=['Sales'], index=['Province'],
   .....:                        columns=['City'], aggfunc=np.sum, margins=True)
   .....:

In [153]: table.stack('City')
Out[153]:
                    Sales
Province City
AL       All         12.0
         Calgary      8.0
         Edmonton     4.0
BC       All         16.0
         Vancouver   16.0
...                   ...
All      Montreal     6.0
         Toronto     13.0
         Vancouver   16.0
         Windsor      1.0
         Winnipeg     3.0

[20 rows x 1 columns]
```

Frequency table like plyr in R

```python
In [154]: grades = [48, 99, 75, 80, 42, 80, 72, 68, 36, 78]

In [155]: df = pd.DataFrame({'ID': ["x%d" % r for r in range(10)],
   .....:                    'Gender': ['F', 'M', 'F', 'M', 'F',
   .....:                               'M', 'F', 'M', 'M', 'M'],
   .....:                    'ExamYear': ['2007', '2007', '2007', '2008', '2008',
   .....:                                 '2008', '2008', '2009', '2009', '2009'],
   .....:                    'Class': ['algebra', 'stats', 'bio', 'algebra',
   .....:                              'algebra', 'stats', 'stats', 'algebra',
   .....:                              'bio', 'bio'],
   .....:                    'Participated': ['yes', 'yes', 'yes', 'yes', 'no',
   .....:                                     'yes', 'yes', 'yes', 'yes', 'yes'],
   .....:                    'Passed': ['yes' if x > 50 else 'no' for x in grades],
   .....:                    'Employed': [True, True, True, False,
   .....:                                 False, False, False, True, True, False],
   .....:                    'Grade': grades})
   .....:

In [156]: df.groupby('ExamYear').agg({'Participated': lambda x: x.value_counts()['yes'],
   .....:                             'Passed': lambda x: sum(x == 'yes'),
   .....:                             'Employed': lambda x: sum(x),
   .....:                             'Grade': lambda x: sum(x) / len(x)})
   .....:
Out[156]:
          Participated  Passed  Employed      Grade
ExamYear
2007                 3       2         3  74.000000
2008                 3       3         0  68.500000
2009                 3       2         2  60.666667
```

Plot pandas DataFrame with year over year data

To create year and month cross tabulation:

```python
In [157]: df = pd.DataFrame({'value': np.random.randn(36)},
   .....:                   index=pd.date_range('2011-01-01', freq='M', periods=36))
   .....:

In [158]: pd.pivot_table(df, index=df.index.month, columns=df.index.year,
   .....:                values='value', aggfunc='sum')
   .....:
Out[158]:
        2011      2012      2013
1  -1.039268 -0.968914  2.565646
2  -0.370647 -1.294524  1.431256
3  -1.157892  0.413738  1.340309
4  -1.344312  0.276662 -1.170299
5   0.844885 -0.472035 -0.226169
6   1.075770 -0.013960  0.410835
7  -0.109050 -0.362543  0.813850
8   1.643563 -0.006154  0.132003
9  -1.469388 -0.923061 -0.827317
10  0.357021  0.895717 -0.076467
11 -0.674600  0.805244 -1.187678
12 -1.776904 -1.206412  1.130127
```

Apply

Rolling apply to organize - Turning embedded lists into a MultiIndex frame

```python
In [159]: df = pd.DataFrame(data={'A': [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
   .....:                         'B': [['a', 'b', 'c'], ['jj', 'kk'], ['ccc']]},
   .....:                   index=['I', 'II', 'III'])
   .....:

In [160]: def SeriesFromSubList(aList):
   .....:     return pd.Series(aList)
   .....:

In [161]: df_orgz = pd.concat({ind: row.apply(SeriesFromSubList)
   .....:                      for ind, row in df.iterrows()})
   .....:

In [162]: df_orgz
Out[162]:
         0    1    2     3
I   A    2    4    8  16.0
    B    a    b    c   NaN
II  A  100  200  NaN   NaN
    B   jj   kk  NaN   NaN
III A   10   20   30   NaN
    B  ccc  NaN  NaN   NaN
```

Rolling apply with a DataFrame returning a Series

Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned

```python
In [163]: df = pd.DataFrame(data=np.random.randn(2000, 2) / 10000,
   .....:                   index=pd.date_range('2001-01-01', periods=2000),
   .....:                   columns=['A', 'B'])
   .....:

In [164]: df
Out[164]:
                   A         B
2001-01-01 -0.000144 -0.000141
2001-01-02  0.000161  0.000102
2001-01-03  0.000057  0.000088
2001-01-04 -0.000221  0.000097
2001-01-05 -0.000201 -0.000041
...              ...       ...
2006-06-19  0.000040 -0.000235
2006-06-20 -0.000123 -0.000021
2006-06-21 -0.000113  0.000114
2006-06-22  0.000136  0.000109
2006-06-23  0.000027  0.000030

[2000 rows x 2 columns]

In [165]: def gm(df, const):
   .....:     v = ((((df.A + df.B) + 1).cumprod()) - 1) * const
   .....:     return v.iloc[-1]
   .....:

In [166]: s = pd.Series({df.index[i]: gm(df.iloc[i:min(i + 51, len(df) - 1)], 5)
   .....:                for i in range(len(df) - 50)})
   .....:

In [167]: s
Out[167]:
2001-01-01    0.000930
2001-01-02    0.002615
2001-01-03    0.001281
2001-01-04    0.001117
2001-01-05    0.002772
                ...
2006-04-30    0.003296
2006-05-01    0.002629
2006-05-02    0.002081
2006-05-03    0.004247
2006-05-04    0.003928
Length: 1950, dtype: float64
```

Rolling apply with a DataFrame returning a Scalar

Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)

```python
In [168]: rng = pd.date_range(start='2014-01-01', periods=100)

In [169]: df = pd.DataFrame({'Open': np.random.randn(len(rng)),
   .....:                    'Close': np.random.randn(len(rng)),
   .....:                    'Volume': np.random.randint(100, 2000, len(rng))},
   .....:                   index=rng)
   .....:

In [170]: df
Out[170]:
                Open     Close  Volume
2014-01-01 -1.611353 -0.492885    1219
2014-01-02 -3.000951  0.445794    1054
2014-01-03 -0.138359 -0.076081    1381
2014-01-04  0.301568  1.198259    1253
2014-01-05  0.276381 -0.669831    1728
...              ...       ...     ...
2014-04-06 -0.040338  0.937843    1188
2014-04-07  0.359661 -0.285908    1864
2014-04-08  0.060978  1.714814     941
2014-04-09  1.759055 -0.455942    1065
2014-04-10  0.138185 -1.147008    1453

[100 rows x 3 columns]

In [171]: def vwap(bars):
   .....:     return ((bars.Close * bars.Volume).sum() / bars.Volume.sum())
   .....:

In [172]: window = 5

In [173]: s = pd.concat([(pd.Series(vwap(df.iloc[i:i + window]),
   .....:                           index=[df.index[i + window]]))
   .....:                for i in range(len(df) - window)])
   .....:

In [174]: s.round(2)
Out[174]:
2014-01-06    0.02
2014-01-07    0.11
2014-01-08    0.10
2014-01-09    0.07
2014-01-10   -0.29
              ...
2014-04-06   -0.63
2014-04-07   -0.02
2014-04-08   -0.03
2014-04-09    0.34
2014-04-10    0.29
Length: 95, dtype: float64
```

Timeseries

Between times

Using indexer between time
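A minimal sketch of the idea (assumed, not the linked code): between_time selects rows by wall-clock time of day, and indexer_between_time returns the integer positions instead.

```python
import pandas as pd

rng = pd.date_range('2014-01-01', periods=48, freq='H')
ts = pd.Series(range(48), index=rng)

ts.between_time('09:00', '11:00')                # rows whose time falls in the window
ts.index.indexer_between_time('09:00', '11:00')  # integer positions of those rows
```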

Constructing a datetime range that excludes weekends and includes only certain times

Vectorized Lookup

Aggregation and plotting time series

Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a Python pandas DataFrame?

Dealing with duplicates when reindexing a timeseries to a specified frequency

Calculate the first day of the month for each entry in a DatetimeIndex

```python
In [175]: dates = pd.date_range('2000-01-01', periods=5)

In [176]: dates.to_period(freq='M').to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
               '2000-01-01'],
              dtype='datetime64[ns]', freq=None)
```

Resampling

The Resample docs.

Using Grouper instead of TimeGrouper for time grouping of values
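For instance (a sketch, with made-up data), pd.Grouper groups a datetime column by frequency directly inside groupby:

```python
import pandas as pd

df = pd.DataFrame({'date': pd.date_range('2014-10-01', periods=6, freq='10D'),
                   'value': range(6)})

# Group the 'date' column into calendar months
df.groupby(pd.Grouper(key='date', freq='M'))['value'].sum()
```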

Time grouping with some missing values

Valid frequency arguments to Grouper

Grouping using a MultiIndex

Using TimeGrouper and another grouping to create subgroups, then apply a custom function

Resampling with custom periods

Resample intraday frame without adding new days

Resample minute data

Resample with groupby

Merge

The Concat docs. The Join docs.

Append two dataframes with overlapping index (emulate R rbind)

```python
In [177]: rng = pd.date_range('2000-01-01', periods=6)

In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])

In [179]: df2 = df1.copy()
```

Depending on df construction, ignore_index may be needed

```python
In [180]: df = df1.append(df2, ignore_index=True)

In [181]: df
Out[181]:
           A         B         C
0  -0.870117 -0.479265 -0.790855
1   0.144817  1.726395 -0.464535
2  -0.821906  1.597605  0.187307
3  -0.128342 -1.511638 -0.289858
4   0.399194 -1.430030 -0.639760
5   1.115116 -2.012600  1.810662
6  -0.870117 -0.479265 -0.790855
7   0.144817  1.726395 -0.464535
8  -0.821906  1.597605  0.187307
9  -0.128342 -1.511638 -0.289858
10  0.399194 -1.430030 -0.639760
11  1.115116 -2.012600  1.810662
```

Self Join of a DataFrame

```python
In [182]: df = pd.DataFrame(data={'Area': ['A'] * 5 + ['C'] * 2,
   .....:                         'Bins': [110] * 2 + [160] * 3 + [40] * 2,
   .....:                         'Test_0': [0, 1, 0, 1, 2, 0, 1],
   .....:                         'Data': np.random.randn(7)})
   .....:

In [183]: df
Out[183]:
  Area  Bins  Test_0      Data
0    A   110       0 -0.433937
1    A   110       1 -0.160552
2    A   160       0  0.744434
3    A   160       1  1.754213
4    A   160       2  0.000850
5    C    40       0  0.342243
6    C    40       1  1.070599

In [184]: df['Test_1'] = df['Test_0'] - 1

In [185]: pd.merge(df, df, left_on=['Bins', 'Area', 'Test_0'],
   .....:          right_on=['Bins', 'Area', 'Test_1'],
   .....:          suffixes=('_L', '_R'))
   .....:
Out[185]:
  Area  Bins  Test_0_L    Data_L  Test_1_L  Test_0_R    Data_R  Test_1_R
0    A   110         0 -0.433937        -1         1 -0.160552         0
1    A   160         0  0.744434        -1         1  1.754213         0
2    A   160         1  1.754213         0         2  0.000850         1
3    C    40         0  0.342243        -1         1  1.070599         0
```

How to set the index and join

KDB like asof join

Join with a criteria based on the values

Using searchsorted to merge based on values inside a range

Plotting

The Plotting docs.

Make Matplotlib look like R

Setting x-axis major and minor labels

Plotting multiple charts in an ipython notebook

Creating a multi-line plot
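For example (a sketch; plotting requires matplotlib to be installed), DataFrame.plot draws one line per column by default:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 3).cumsum(axis=0), columns=list('ABC'))
df.plot()   # one line per column, sharing the same axes
```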

Plotting a heatmap

Annotate a time-series plot

Annotate a time-series plot #2

Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter

Boxplot for each quartile of a stratifying variable

```python
In [186]: df = pd.DataFrame(
   .....:     {'stratifying_var': np.random.uniform(0, 100, 20),
   .....:      'price': np.random.normal(100, 5, 20)})
   .....:

In [187]: df['quartiles'] = pd.qcut(
   .....:     df['stratifying_var'],
   .....:     4,
   .....:     labels=['0-25%', '25-50%', '50-75%', '75-100%'])
   .....:

In [188]: df.boxplot(column='price', by='quartiles')
Out[188]: <matplotlib.axes._subplots.AxesSubplot at 0x7f65f77e6470>
```

(figure: boxplot of price for each quartile of the stratifying variable)

Data In/Out

Performance comparison of SQL vs HDF5

CSV

The CSV docs

read_csv in action

appending to a csv

Reading a csv chunk-by-chunk
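A sketch of the pattern (the file name and filter are hypothetical): iterate over chunks, reduce each one, and concatenate the pieces.

```python
import pandas as pd

pieces = []
for chunk in pd.read_csv('large.csv', chunksize=10000):  # hypothetical file
    pieces.append(chunk[chunk['value'] > 0])             # keep only what you need
result = pd.concat(pieces, ignore_index=True)
```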

Reading only certain rows of a csv chunk-by-chunk

Reading the first few lines of a frame

Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here
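A sketch of that pattern with the standard library's zipfile module (file names are hypothetical):

```python
import zipfile

import pandas as pd

# Open the archive, then hand the inner file handle straight to read_csv
with zipfile.ZipFile('data.zip') as zf:
    with zf.open('data.csv') as fh:
        df = pd.read_csv(fh)
```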

Inferring dtypes from a file

Dealing with bad lines

Dealing with bad lines II

Reading CSV with Unix timestamps and converting to local timezone

Write a multi-row index CSV without writing duplicates

Reading multiple files to create a single DataFrame

The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():

```python
In [189]: for i in range(3):
   .....:     data = pd.DataFrame(np.random.randn(10, 4))
   .....:     data.to_csv('file_{}.csv'.format(i))
   .....:

In [190]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']

In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
```

You can use the same approach to read all files matching a pattern. Here is an example using glob:

```python
In [192]: import glob

In [193]: import os

In [194]: files = glob.glob('file_*.csv')

In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
```

Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.

Parsing date components in multi-columns

Parsing date components in multi-columns is faster with a format

```python
In [196]: i = pd.date_range('20000101', periods=10000)

In [197]: df = pd.DataFrame({'year': i.year, 'month': i.month, 'day': i.day})

In [198]: df.head()
Out[198]:
   year  month  day
0  2000      1    1
1  2000      1    2
2  2000      1    3
3  2000      1    4
4  2000      1    5

In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
   .....: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],
   .....:                                           x['month'], x['day']), axis=1)
   .....: ds.head()
   .....: %timeit pd.to_datetime(ds)
   .....:
9.36 ms +- 106 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.88 ms +- 34.5 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
```

Skip row between header and data

```python
In [200]: data = """;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: date;Param1;Param2;Param4;Param5
   .....: ;m²;°C;m²;m
   .....: ;;;;
   .....: 01.01.1990 00:00;1;1;2;3
   .....: 01.01.1990 01:00;5;3;4;5
   .....: 01.01.1990 02:00;9;5;6;7
   .....: 01.01.1990 03:00;13;7;8;9
   .....: 01.01.1990 04:00;17;9;10;11
   .....: 01.01.1990 05:00;21;11;12;13
   .....: """
   .....:
```

Option 1: pass rows explicitly to skip rows

```python
In [201]: from io import StringIO

In [202]: pd.read_csv(StringIO(data), sep=';', skiprows=[11, 12],
   .....:             index_col=0, parse_dates=True, header=10)
   .....:
Out[202]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13
```

Option 2: read column names and then data

```python
In [203]: pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
Out[203]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')

In [204]: columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns

In [205]: pd.read_csv(StringIO(data), sep=';', index_col=0,
   .....:             header=12, parse_dates=True, names=columns)
   .....:
Out[205]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13
```

SQL

The SQL docs

Reading from databases with SQL

Excel

The Excel docs

Reading from a filelike handle

Modifying formatting in XlsxWriter output

HTML

Reading HTML tables from a server that cannot handle the default request header

HDFStore

The HDFStores docs

Simple queries with a Timestamp Index
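A minimal sketch of the technique (requires PyTables; the file name is arbitrary): store in table format with append, then query on the index with a where string.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 2),
                  index=pd.date_range('2013-08-01', periods=5),
                  columns=['A', 'B'])

store = pd.HDFStore('query.h5')
store.append('df', df)                          # table format supports queries
store.select('df', "index >= '2013-08-03'")     # rows from that timestamp on
store.close()
```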

Managing heterogeneous data using a linked multiple table hierarchy

Merging on-disk tables with millions of rows

Avoiding inconsistencies when writing to a store from multiple processes/threads

De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from csv file and creating a store by chunks, with date parsing as well. See here

Creating a store chunk-by-chunk from a csv file

Appending to a store, while creating a unique index

Large Data work flows

Reading in a sequence of files, then providing a global unique index to a store while appending

Groupby on a HDFStore with low group density

Groupby on a HDFStore with high group density

Hierarchical queries on a HDFStore

Counting with a HDFStore

Troubleshoot HDFStore exceptions

Setting min_itemsize with strings

Using ptrepack to create a completely-sorted-index on a store

Storing Attributes to a group node

```python
In [206]: df = pd.DataFrame(np.random.randn(8, 3))

In [207]: store = pd.HDFStore('test.h5')

In [208]: store.put('df', df)

# you can store an arbitrary Python object via pickle
In [209]: store.get_storer('df').attrs.my_attribute = {'A': 10}

In [210]: store.get_storer('df').attrs.my_attribute
Out[210]: {'A': 10}
```

Binary files

pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit machine,

```c
#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");
    fwrite(&d, sizeof(Data), n, file);
    fclose(file);

    return 0;
}
```

the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:

```python
names = 'count', 'avg', 'scale'

# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
              align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
```

::: tip Note

The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.

:::

Computation

Numerical integration (sample-based) of a time series
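A sketch of the sample-based approach (assumed here, not the linked code): convert the timestamps to seconds and apply the trapezoidal rule.

```python
import numpy as np
import pandas as pd

rng = pd.date_range('2014-01-01', periods=5, freq='H')
ts = pd.Series([0.0, 1.0, 2.0, 1.0, 0.0], index=rng)

x = ts.index.view('int64') / 1e9   # nanoseconds since epoch -> seconds
np.trapz(ts.values, x)             # area under the sampled curve
```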

Correlation

Often it’s useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:

```python
In [211]: df = pd.DataFrame(np.random.random(size=(100, 5)))

In [212]: corr_mat = df.corr()

In [213]: mask = np.tril(np.ones_like(corr_mat, dtype=np.bool), k=-1)

In [214]: corr_mat.where(mask)
Out[214]:
          0         1         2         3   4
0       NaN       NaN       NaN       NaN NaN
1 -0.018923       NaN       NaN       NaN NaN
2 -0.076296 -0.012464       NaN       NaN NaN
3 -0.169941 -0.289416  0.076462       NaN NaN
4  0.064326  0.018759 -0.084140 -0.079859 NaN
```

The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.

```python
In [215]: def distcorr(x, y):
   .....:     n = len(x)
   .....:     a = np.zeros(shape=(n, n))
   .....:     b = np.zeros(shape=(n, n))
   .....:     for i in range(n):
   .....:         for j in range(i + 1, n):
   .....:             a[i, j] = abs(x[i] - x[j])
   .....:             b[i, j] = abs(y[i] - y[j])
   .....:     a += a.T
   .....:     b += b.T
   .....:     a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
   .....:     b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
   .....:     A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
   .....:     B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
   .....:     cov_ab = np.sqrt(np.nansum(A * B)) / n
   .....:     std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
   .....:     std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
   .....:     return cov_ab / std_a / std_b
   .....:

In [216]: df = pd.DataFrame(np.random.normal(size=(100, 3)))

In [217]: df.corr(method=distcorr)
Out[217]:
          0         1         2
0  1.000000  0.199653  0.214871
1  0.199653  1.000000  0.195116
2  0.214871  0.195116  1.000000
```

Timedeltas

The Timedeltas docs.

Using timedeltas

```python
In [218]: import datetime

In [219]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))

In [220]: s - s.max()
Out[220]:
0   -2 days
1   -1 days
2    0 days
dtype: timedelta64[ns]

In [221]: s.max() - s
Out[221]:
0   2 days
1   1 days
2   0 days
dtype: timedelta64[ns]

In [222]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[222]:
0   364 days 20:55:00
1   365 days 20:55:00
2   366 days 20:55:00
dtype: timedelta64[ns]

In [223]: s + datetime.timedelta(minutes=5)
Out[223]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

In [224]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[224]:
0   -365 days +03:05:00
1   -366 days +03:05:00
2   -367 days +03:05:00
dtype: timedelta64[ns]

In [225]: datetime.timedelta(minutes=5) + s
Out[225]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]
```

Adding and subtracting deltas and dates

```python
In [226]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])

In [227]: df = pd.DataFrame({'A': s, 'B': deltas})

In [228]: df
Out[228]:
           A      B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days

In [229]: df['New Dates'] = df['A'] + df['B']

In [230]: df['Delta'] = df['A'] - df['New Dates']

In [231]: df
Out[231]:
           A      B  New Dates   Delta
0 2012-01-01 0 days 2012-01-01  0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days

In [232]: df.dtypes
Out[232]:
A             datetime64[ns]
B            timedelta64[ns]
New Dates     datetime64[ns]
Delta        timedelta64[ns]
dtype: object
```

Another example

Values can be set to NaT using np.nan, similar to datetime

```python
In [233]: y = s - s.shift()

In [234]: y
Out[234]:
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns]

In [235]: y[1] = np.nan

In [236]: y
Out[236]:
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns]
```

Aliasing axis names

To globally provide aliases for axis names, one can define these 2 functions:

```python
In [237]: def set_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES[alias] = axis
   .....:
```

```python
In [238]: def clear_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES.pop(alias, None)
   .....:
```

```python
In [239]: set_axis_alias(pd.DataFrame, 'columns', 'myaxis2')

In [240]: df2 = pd.DataFrame(np.random.randn(3, 2), columns=['c1', 'c2'],
   .....:                    index=['i1', 'i2', 'i3'])
   .....:

In [241]: df2.sum(axis='myaxis2')
Out[241]:
i1   -0.461013
i2    2.040016
i3    0.904681
dtype: float64

In [242]: clear_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
```

Creating example data

To create a dataframe from every combination of some given values, like R’s expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values:

```python
In [243]: def expand_grid(data_dict):
   .....:     rows = itertools.product(*data_dict.values())
   .....:     return pd.DataFrame.from_records(rows, columns=data_dict.keys())
   .....:

In [244]: df = expand_grid({'height': [60, 70],
   .....:                   'weight': [100, 140, 180],
   .....:                   'sex': ['Male', 'Female']})
   .....:

In [245]: df
Out[245]:
    height  weight     sex
0       60     100    Male
1       60     100  Female
2       60     140    Male
3       60     140  Female
4       60     180    Male
5       60     180  Female
6       70     100    Male
7       70     100  Female
8       70     140    Male
9       70     140  Female
10      70     180    Male
11      70     180  Female
```