Categorical data

This is an introduction to the pandas categorical data type, including a short comparison with R’s factor.

Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or rating via Likert scales.

In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, …) are not possible.

All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes which point to the real value in the categories array.

The categorical data type is useful in the following cases:

  • A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here.
  • The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of the lexical order, see here.
  • As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).

See also the API docs on categoricals.
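The memory savings mentioned in the first bullet above can be checked directly with `Series.memory_usage(deep=True)`. This is an illustrative sketch (the data and variable names are not from the examples below):

```python
import pandas as pd

# A long Series with only a few distinct string values: a good
# candidate for the category dtype (illustrative data)
s_obj = pd.Series(["low", "medium", "high"] * 10_000)
s_cat = s_obj.astype("category")

# deep=True also counts the string payloads, not just the pointers
mem_obj = s_obj.memory_usage(deep=True)
mem_cat = s_cat.memory_usage(deep=True)
print(mem_obj, mem_cat)  # the categorical version is much smaller
```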

Object creation

Series creation

Categorical Series or columns in a DataFrame can be created in several ways:

By specifying dtype="category" when constructing a Series:

``` python
In [1]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [2]: s
Out[2]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a, b, c]
```

By converting an existing Series or column to a category dtype:

``` python
In [3]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})

In [4]: df["B"] = df["A"].astype('category')

In [5]: df
Out[5]: 
   A  B
0  a  a
1  b  b
2  c  c
3  a  a
```

By using special functions, such as cut(), which groups data into discrete bins. See the example on tiling in the docs.

``` python
In [6]: df = pd.DataFrame({'value': np.random.randint(0, 100, 20)})

In [7]: labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]

In [8]: df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)

In [9]: df.head(10)
Out[9]: 
   value    group
0     65  60 - 69
1     49  40 - 49
2     56  50 - 59
3     43  40 - 49
4     43  40 - 49
5     91  90 - 99
6     32  30 - 39
7     87  80 - 89
8     36  30 - 39
9      8    0 - 9
```

By passing a pandas.Categorical object to a Series or assigning it to a DataFrame.

``` python
In [10]: raw_cat = pd.Categorical(["a", "b", "c", "a"], categories=["b", "c", "d"],
   ....:                          ordered=False)
   ....: 

In [11]: s = pd.Series(raw_cat)

In [12]: s
Out[12]: 
0    NaN
1      b
2      c
3    NaN
dtype: category
Categories (3, object): [b, c, d]

In [13]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})

In [14]: df["B"] = raw_cat

In [15]: df
Out[15]: 
   A    B
0  a  NaN
1  b    b
2  c    c
3  a  NaN
```

Categorical data has a specific category dtype:

``` python
In [16]: df.dtypes
Out[16]: 
A      object
B    category
dtype: object
```

DataFrame creation

Similar to the previous section where a single column was converted to categorical, all columns in a DataFrame can be batch converted to categorical either during or after construction.

This can be done during construction by specifying dtype="category" in the DataFrame constructor:

``` python
In [17]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')}, dtype="category")

In [18]: df.dtypes
Out[18]: 
A    category
B    category
dtype: object
```

Note that the categories present in each column differ; the conversion is done column by column, so only labels present in a given column are categories:

``` python
In [19]: df['A']
Out[19]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (3, object): [a, b, c]

In [20]: df['B']
Out[20]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (3, object): [b, c, d]
```

New in version 0.23.0.

Analogously, all columns in an existing DataFrame can be batch converted using DataFrame.astype():

``` python
In [21]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')})

In [22]: df_cat = df.astype('category')

In [23]: df_cat.dtypes
Out[23]: 
A    category
B    category
dtype: object
```

This conversion is likewise done column by column:

``` python
In [24]: df_cat['A']
Out[24]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (3, object): [a, b, c]

In [25]: df_cat['B']
Out[25]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (3, object): [b, c, d]
```

Controlling behavior

In the examples above where we passed dtype='category', we used the default behavior:

  1. Categories are inferred from the data.
  2. Categories are unordered.

To control those behaviors, instead of passing 'category', use an instance of CategoricalDtype.

``` python
In [26]: from pandas.api.types import CategoricalDtype

In [27]: s = pd.Series(["a", "b", "c", "a"])

In [28]: cat_type = CategoricalDtype(categories=["b", "c", "d"],
   ....:                             ordered=True)
   ....: 

In [29]: s_cat = s.astype(cat_type)

In [30]: s_cat
Out[30]: 
0    NaN
1      b
2      c
3    NaN
dtype: category
Categories (3, object): [b < c < d]
```

Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories are consistent among all columns.

``` python
In [31]: from pandas.api.types import CategoricalDtype

In [32]: df = pd.DataFrame({'A': list('abca'), 'B': list('bccd')})

In [33]: cat_type = CategoricalDtype(categories=list('abcd'),
   ....:                             ordered=True)
   ....: 

In [34]: df_cat = df.astype(cat_type)

In [35]: df_cat['A']
Out[35]: 
0    a
1    b
2    c
3    a
Name: A, dtype: category
Categories (4, object): [a < b < c < d]

In [36]: df_cat['B']
Out[36]: 
0    b
1    c
2    c
3    d
Name: B, dtype: category
Categories (4, object): [a < b < c < d]
```

::: tip Note

To perform table-wise conversion, where all labels in the entire DataFrame are used as categories for each column, the categories parameter can be determined programmatically by categories = pd.unique(df.to_numpy().ravel()).

:::
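The note above can be sketched as follows, gathering the labels of the whole frame before converting (variable names are illustrative):

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})

# Use every label in the entire DataFrame as the categories of each column
categories = pd.unique(df.to_numpy().ravel())
df_cat = df.astype(CategoricalDtype(categories=categories))

# Both columns now share the same categories, including labels
# that never occur in that particular column
print(df_cat["A"].cat.categories)
```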

If you already have codes and categories, you can use the from_codes() constructor to save the factorize step performed during normal construction:

``` python
In [37]: splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])

In [38]: s = pd.Series(pd.Categorical.from_codes(splitter,
   ....:                                         categories=["train", "test"]))
   ....: 
```

Regaining original data

To get back to the original Series or NumPy array, use Series.astype(original_dtype) or np.asarray(categorical):

``` python
In [39]: s = pd.Series(["a", "b", "c", "a"])

In [40]: s
Out[40]: 
0    a
1    b
2    c
3    a
dtype: object

In [41]: s2 = s.astype('category')

In [42]: s2
Out[42]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a, b, c]

In [43]: s2.astype(str)
Out[43]: 
0    a
1    b
2    c
3    a
dtype: object

In [44]: np.asarray(s2)
Out[44]: array(['a', 'b', 'c', 'a'], dtype=object)
```

::: tip Note

In contrast to R’s factor function, categorical data is not converting input values to strings; categories will end up the same data type as the original values.

:::

::: tip Note

In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use categories to change the categories after creation time.

:::

CategoricalDtype

Changed in version 0.21.0.

A categorical’s type is fully described by

  1. categories: a sequence of unique values and no missing values
  2. ordered: a boolean

This information can be stored in a CategoricalDtype. The categories argument is optional, which implies that the actual categories should be inferred from whatever is present in the data when the pandas.Categorical is created. The categories are assumed to be unordered by default.

``` python
In [45]: from pandas.api.types import CategoricalDtype

In [46]: CategoricalDtype(['a', 'b', 'c'])
Out[46]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=None)

In [47]: CategoricalDtype(['a', 'b', 'c'], ordered=True)
Out[47]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)

In [48]: CategoricalDtype()
Out[48]: CategoricalDtype(categories=None, ordered=None)
```

A CategoricalDtype can be used in any place pandas expects a dtype. For example pandas.read_csv(), pandas.DataFrame.astype(), or in the Series constructor.

::: tip Note

As a convenience, you can use the string 'category' in place of a CategoricalDtype when you want the default behavior of the categories being unordered, and equal to the set values present in the array. In other words, dtype='category' is equivalent to dtype=CategoricalDtype().

:::
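The equivalence stated in the note can be checked directly; a small sketch with illustrative data:

```python
import pandas as pd
from pandas.api.types import CategoricalDtype

# Both spellings give an unordered categorical whose categories
# are inferred from the data
s1 = pd.Series(["a", "b", "a"], dtype="category")
s2 = pd.Series(["a", "b", "a"], dtype=CategoricalDtype())

print(s1.dtype == s2.dtype)
```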

Equality semantics

Two instances of CategoricalDtype compare equal whenever they have the same categories and order. When comparing two unordered categoricals, the order of the categories is not considered.

``` python
In [49]: c1 = CategoricalDtype(['a', 'b', 'c'], ordered=False)

# Equal, since order is not considered when ordered=False
In [50]: c1 == CategoricalDtype(['b', 'c', 'a'], ordered=False)
Out[50]: True

# Unequal, since the second CategoricalDtype is ordered
In [51]: c1 == CategoricalDtype(['a', 'b', 'c'], ordered=True)
Out[51]: False
```

All instances of CategoricalDtype compare equal to the string 'category'.

``` python
In [52]: c1 == 'category'
Out[52]: True
```

::: danger Warning

Since dtype='category' is essentially CategoricalDtype(None, False), and since all instances of CategoricalDtype compare equal to 'category', all instances of CategoricalDtype compare equal to CategoricalDtype(None, False), regardless of categories or ordered.

:::

Description

Using describe() on categorical data will produce similar output to a Series or DataFrame of type string.

``` python
In [53]: cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])

In [54]: df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})

In [55]: df.describe()
Out[55]: 
       cat  s
count    3  3
unique   2  2
top      c  c
freq     2  2

In [56]: df["cat"].describe()
Out[56]: 
count     3
unique    2
top       c
freq      2
Name: cat, dtype: object
```

Working with categories

Categorical data has a categories and an ordered property, which list the possible values and whether the ordering matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don’t manually specify categories and ordering, they are inferred from the passed arguments.

``` python
In [57]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [58]: s.cat.categories
Out[58]: Index(['a', 'b', 'c'], dtype='object')

In [59]: s.cat.ordered
Out[59]: False
```

It’s also possible to pass in the categories in a specific order:

``` python
In [60]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"],
   ....:                              categories=["c", "b", "a"]))
   ....: 

In [61]: s.cat.categories
Out[61]: Index(['c', 'b', 'a'], dtype='object')

In [62]: s.cat.ordered
Out[62]: False
```

::: tip Note

New categorical data are not automatically ordered. You must explicitly pass ordered=True to indicate an ordered Categorical.

:::

::: tip Note

The result of unique() is not always the same as Series.cat.categories, because Series.unique() has a couple of guarantees, namely that it returns categories in the order of appearance, and it only includes values that are actually present.

``` python
In [63]: s = pd.Series(list('babc')).astype(CategoricalDtype(list('abcd')))

In [64]: s
Out[64]: 
0    b
1    a
2    b
3    c
dtype: category
Categories (4, object): [a, b, c, d]

# categories
In [65]: s.cat.categories
Out[65]: Index(['a', 'b', 'c', 'd'], dtype='object')

# uniques
In [66]: s.unique()
Out[66]: 
[b, a, c]
Categories (3, object): [b, a, c]
```

:::

Renaming categories

Renaming categories is done by assigning new values to the Series.cat.categories property or by using the rename_categories() method:

``` python
In [67]: s = pd.Series(["a", "b", "c", "a"], dtype="category")

In [68]: s
Out[68]: 
0    a
1    b
2    c
3    a
dtype: category
Categories (3, object): [a, b, c]

In [69]: s.cat.categories = ["Group %s" % g for g in s.cat.categories]

In [70]: s
Out[70]: 
0    Group a
1    Group b
2    Group c
3    Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]

In [71]: s = s.cat.rename_categories([1, 2, 3])

In [72]: s
Out[72]: 
0    1
1    2
2    3
3    1
dtype: category
Categories (3, int64): [1, 2, 3]

# You can also pass a dict-like object to map the renaming
In [73]: s = s.cat.rename_categories({1: 'x', 2: 'y', 3: 'z'})

In [74]: s
Out[74]: 
0    x
1    y
2    z
3    x
dtype: category
Categories (3, object): [x, y, z]
```

::: tip Note

In contrast to R’s factor, categorical data can have categories of other types than string.

:::

::: tip Note

Be aware that assigning new categories is an inplace operation, while most other operations under Series.cat per default return a new Series of dtype category.

:::

Categories must be unique or a ValueError is raised:

``` python
In [75]: try:
   ....:     s.cat.categories = [1, 1, 1]
   ....: except ValueError as e:
   ....:     print("ValueError:", str(e))
   ....: 
ValueError: Categorical categories must be unique
```

Categories must also not be NaN or a ValueError is raised:

``` python
In [76]: try:
   ....:     s.cat.categories = [1, 2, np.nan]
   ....: except ValueError as e:
   ....:     print("ValueError:", str(e))
   ....: 
ValueError: Categorial categories cannot be null
```

Appending new categories

Appending categories can be done by using the add_categories() method:

``` python
In [77]: s = s.cat.add_categories([4])

In [78]: s.cat.categories
Out[78]: Index(['x', 'y', 'z', 4], dtype='object')

In [79]: s
Out[79]: 
0    x
1    y
2    z
3    x
dtype: category
Categories (4, object): [x, y, z, 4]
```

Removing categories

Removing categories can be done by using the remove_categories() method. Values which are removed are replaced by np.nan:

``` python
In [80]: s = s.cat.remove_categories([4])

In [81]: s
Out[81]: 
0    x
1    y
2    z
3    x
dtype: category
Categories (3, object): [x, y, z]
```

Removing unused categories

Removing unused categories can also be done:

``` python
In [82]: s = pd.Series(pd.Categorical(["a", "b", "a"],
   ....:                              categories=["a", "b", "c", "d"]))
   ....: 

In [83]: s
Out[83]: 
0    a
1    b
2    a
dtype: category
Categories (4, object): [a, b, c, d]

In [84]: s.cat.remove_unused_categories()
Out[84]: 
0    a
1    b
2    a
dtype: category
Categories (2, object): [a, b]
```

Setting categories

If you want to remove and add new categories in one step (which has some speed advantage), or simply set the categories to a predefined scale, use set_categories().

``` python
In [85]: s = pd.Series(["one", "two", "four", "-"], dtype="category")

In [86]: s
Out[86]: 
0     one
1     two
2    four
3       -
dtype: category
Categories (4, object): [-, four, one, two]

In [87]: s = s.cat.set_categories(["one", "two", "three", "four"])

In [88]: s
Out[88]: 
0     one
1     two
2    four
3     NaN
dtype: category
Categories (4, object): [one, two, three, four]
```

::: tip Note

Be aware that Categorical.set_categories() cannot know whether some category is omitted intentionally or because it is misspelled or (under Python3) due to a type difference (e.g., NumPy S1 dtype and Python strings). This can result in surprising behaviour!

:::
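For instance, a misspelled category in the new scale silently produces missing values rather than an error (the typo below is deliberate and the data illustrative):

```python
import pandas as pd

s = pd.Series(["one", "two", "four"], dtype="category")

# "fuor" is a deliberate typo: "four" is not in the new categories,
# so it is silently replaced by NaN instead of raising
s2 = s.cat.set_categories(["one", "two", "three", "fuor"])
print(s2.isna().tolist())  # [False, False, True]
```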

Sorting and order

If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.

``` python
In [89]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], ordered=False))

In [90]: s.sort_values(inplace=True)

In [91]: s = pd.Series(["a", "b", "c", "a"]).astype(
   ....:     CategoricalDtype(ordered=True)
   ....: )
   ....: 

In [92]: s.sort_values(inplace=True)

In [93]: s
Out[93]: 
0    a
3    a
1    b
2    c
dtype: category
Categories (3, object): [a < b < c]

In [94]: s.min(), s.max()
Out[94]: ('a', 'c')
```

You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered(). These will by default return a new object.

``` python
In [95]: s.cat.as_ordered()
Out[95]: 
0    a
3    a
1    b
2    c
dtype: category
Categories (3, object): [a < b < c]

In [96]: s.cat.as_unordered()
Out[96]: 
0    a
3    a
1    b
2    c
dtype: category
Categories (3, object): [a, b, c]
```

Sorting will use the order defined by categories, not any lexical order present on the data type. This is even true for strings and numeric data:

``` python
In [97]: s = pd.Series([1, 2, 3, 1], dtype="category")

In [98]: s = s.cat.set_categories([2, 3, 1], ordered=True)

In [99]: s
Out[99]: 
0    1
1    2
2    3
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [100]: s.sort_values(inplace=True)

In [101]: s
Out[101]: 
1    2
2    3
0    1
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [102]: s.min(), s.max()
Out[102]: (2, 1)
```

Reordering

Reordering the categories is possible via the Categorical.reorder_categories() and the Categorical.set_categories() methods. For Categorical.reorder_categories(), all old categories must be included in the new categories and no new categories are allowed. This will necessarily make the sort order the same as the categories order.

``` python
In [103]: s = pd.Series([1, 2, 3, 1], dtype="category")

In [104]: s = s.cat.reorder_categories([2, 3, 1], ordered=True)

In [105]: s
Out[105]: 
0    1
1    2
2    3
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [106]: s.sort_values(inplace=True)

In [107]: s
Out[107]: 
1    2
2    3
0    1
3    1
dtype: category
Categories (3, int64): [2 < 3 < 1]

In [108]: s.min(), s.max()
Out[108]: (2, 1)
```

::: tip Note

Note the difference between assigning new categories and reordering the categories: the first renames categories and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still be sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values in the Series are changed.

:::
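The difference described in the note can be sketched as follows (the data is illustrative):

```python
import pandas as pd

s = pd.Series(["a", "b", "c", "a"], dtype="category")

# Renaming changes the individual values in the Series...
renamed = s.cat.rename_categories(["x", "y", "z"])
print(renamed.tolist())  # ['x', 'y', 'z', 'x']

# ...while reordering keeps the values and only changes how they sort
reordered = s.cat.reorder_categories(["c", "b", "a"], ordered=True)
print(reordered.tolist())                # unchanged: ['a', 'b', 'c', 'a']
print(reordered.sort_values().tolist())  # ['c', 'b', 'a', 'a']
```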

::: tip Note

If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Numeric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to compute the mean between two values if the length of an array is even) do not work and raise a TypeError.

:::

Multi column sorting

A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering of the categorical is determined by the categories of that column.

``` python
In [109]: dfs = pd.DataFrame({'A': pd.Categorical(list('bbeebbaa'),
   .....:                                         categories=['e', 'a', 'b'],
   .....:                                         ordered=True),
   .....:                     'B': [1, 2, 1, 2, 2, 1, 2, 1]})
   .....: 

In [110]: dfs.sort_values(by=['A', 'B'])
Out[110]: 
   A  B
2  e  1
3  e  2
7  a  1
6  a  2
0  b  1
5  b  1
1  b  2
4  b  2
```

Reordering the categories changes a future sort.

``` python
In [111]: dfs['A'] = dfs['A'].cat.reorder_categories(['a', 'b', 'e'])

In [112]: dfs.sort_values(by=['A', 'B'])
Out[112]: 
   A  B
7  a  1
6  a  2
0  b  1
5  b  1
1  b  2
4  b  2
2  e  1
3  e  2
```

Comparisons

Comparing categorical data with other objects is possible in three cases:

  • Comparing equality (== and !=) to a list-like object (list, Series, array, …) of the same length as the categorical data.
  • All comparisons (==, !=, >, >=, <, and <=) of categorical data to another categorical Series, when ordered==True and the categories are the same.
  • All comparisons of categorical data to a scalar.

All other comparisons, especially “non-equality” comparisons of two categoricals with different categories or a categorical with any list-like object, will raise a TypeError.

::: tip Note

Any “non-equality” comparisons of categorical data with a Series, np.array, list or categorical data with different categories or ordering will raise a TypeError because custom categories ordering could be interpreted in two ways: one with taking into account the ordering and one without.

:::

``` python
In [113]: cat = pd.Series([1, 2, 3]).astype(
   .....:     CategoricalDtype([3, 2, 1], ordered=True)
   .....: )
   .....: 

In [114]: cat_base = pd.Series([2, 2, 2]).astype(
   .....:     CategoricalDtype([3, 2, 1], ordered=True)
   .....: )
   .....: 

In [115]: cat_base2 = pd.Series([2, 2, 2]).astype(
   .....:     CategoricalDtype(ordered=True)
   .....: )
   .....: 

In [116]: cat
Out[116]: 
0    1
1    2
2    3
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [117]: cat_base
Out[117]: 
0    2
1    2
2    2
dtype: category
Categories (3, int64): [3 < 2 < 1]

In [118]: cat_base2
Out[118]: 
0    2
1    2
2    2
dtype: category
Categories (1, int64): [2]
```

Comparing to a categorical with the same categories and ordering or to a scalar works:

``` python
In [119]: cat > cat_base
Out[119]: 
0     True
1    False
2    False
dtype: bool

In [120]: cat > 2
Out[120]: 
0     True
1    False
2    False
dtype: bool
```

Equality comparisons work with any list-like object of same length and scalars:

``` python
In [121]: cat == cat_base
Out[121]: 
0    False
1     True
2    False
dtype: bool

In [122]: cat == np.array([1, 2, 3])
Out[122]: 
0    True
1    True
2    True
dtype: bool

In [123]: cat == 2
Out[123]: 
0    False
1     True
2    False
dtype: bool
```

This doesn’t work because the categories are not the same:

``` python
In [124]: try:
   .....:     cat > cat_base2
   .....: except TypeError as e:
   .....:     print("TypeError:", str(e))
   .....: 
TypeError: Categoricals can only be compared if 'categories' are the same. Categories are different lengths
```

If you want to do a “non-equality” comparison of a categorical series with a list-like object which is not categorical data, you need to be explicit and convert the categorical data back to the original values:

``` python
In [125]: base = np.array([1, 2, 3])

In [126]: try:
   .....:     cat > base
   .....: except TypeError as e:
   .....:     print("TypeError:", str(e))
   .....: 
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
If you want to compare values, use 'np.asarray(cat) <op> other'.

In [127]: np.asarray(cat) > base
Out[127]: array([False, False, False])
```

When you compare two unordered categoricals with the same categories, the order is not considered:

``` python
In [128]: c1 = pd.Categorical(['a', 'b'], categories=['a', 'b'], ordered=False)

In [129]: c2 = pd.Categorical(['a', 'b'], categories=['b', 'a'], ordered=False)

In [130]: c1 == c2
Out[130]: array([ True,  True])
```

Operations

Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with categorical data:

Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data:

``` python
In [131]: s = pd.Series(pd.Categorical(["a", "b", "c", "c"],
   .....:                              categories=["c", "a", "b", "d"]))
   .....: 

In [132]: s.value_counts()
Out[132]: 
c    2
b    1
a    1
d    0
dtype: int64
```

Groupby will also show “unused” categories:

``` python
In [133]: cats = pd.Categorical(["a", "b", "b", "b", "c", "c", "c"],
   .....:                       categories=["a", "b", "c", "d"])
   .....: 

In [134]: df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})

In [135]: df.groupby("cats").mean()
Out[135]: 
      values
cats        
a        1.0
b        2.0
c        4.0
d        NaN

In [136]: cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])

In [137]: df2 = pd.DataFrame({"cats": cats2,
   .....:                     "B": ["c", "d", "c", "d"],
   .....:                     "values": [1, 2, 3, 4]})
   .....: 

In [138]: df2.groupby(["cats", "B"]).mean()
Out[138]: 
        values
cats B        
a    c     1.0
     d     2.0
b    c     3.0
     d     4.0
c    c     NaN
     d     NaN
```

Pivot tables:

``` python
In [139]: raw_cat = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])

In [140]: df = pd.DataFrame({"A": raw_cat,
   .....:                    "B": ["c", "d", "c", "d"],
   .....:                    "values": [1, 2, 3, 4]})
   .....: 

In [141]: pd.pivot_table(df, values='values', index=['A', 'B'])
Out[141]: 
     values
A B        
a c       1
  d       2
b c       3
  d       4
```

Data munging

The optimized pandas data access methods .loc, .iloc, .at, and .iat work as normal. The only difference is the return type (for getting) and that only values already in the categories can be assigned.

Getting

If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.

``` python
In [142]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])

In [143]: cats = pd.Series(["a", "b", "b", "b", "c", "c", "c"],
   .....:                  dtype="category", index=idx)
   .....: 

In [144]: values = [1, 2, 2, 2, 3, 4, 5]

In [145]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)

In [146]: df.iloc[2:4, :]
Out[146]: 
  cats  values
j    b       2
k    b       2

In [147]: df.iloc[2:4, :].dtypes
Out[147]: 
cats      category
values       int64
dtype: object

In [148]: df.loc["h":"j", "cats"]
Out[148]: 
h    a
i    b
j    b
Name: cats, dtype: category
Categories (3, object): [a, b, c]

In [149]: df[df["cats"] == "b"]
Out[149]: 
  cats  values
i    b       2
j    b       2
k    b       2
```

An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype object:

``` python
# get the complete "h" row as a Series
In [150]: df.loc["h", :]
Out[150]: 
cats      a
values    1
Name: h, dtype: object
```

Returning a single item from categorical data will also return the value, not a categorical of length “1”.

``` python
In [151]: df.iat[0, 0]
Out[151]: 'a'

In [152]: df["cats"].cat.categories = ["x", "y", "z"]

In [153]: df.at["h", "cats"]  # returns a string
Out[153]: 'x'
```

::: tip Note

This is in contrast to R’s factor function, where factor(c(1,2,3))[1] returns a single value factor.

:::

To get a single value Series of type category, you pass in a list with a single value:

``` python
In [154]: df.loc[["h"], "cats"]
Out[154]: 
h    x
Name: cats, dtype: category
Categories (3, object): [x, y, z]
```

String and datetime accessors

The accessors .dt and .str will work if the s.cat.categories are of an appropriate type:

``` python
In [155]: str_s = pd.Series(list('aabb'))

In [156]: str_cat = str_s.astype('category')

In [157]: str_cat
Out[157]: 
0    a
1    a
2    b
3    b
dtype: category
Categories (2, object): [a, b]

In [158]: str_cat.str.contains("a")
Out[158]: 
0     True
1     True
2    False
3    False
dtype: bool

In [159]: date_s = pd.Series(pd.date_range('1/1/2015', periods=5))

In [160]: date_cat = date_s.astype('category')

In [161]: date_cat
Out[161]: 
0   2015-01-01
1   2015-01-02
2   2015-01-03
3   2015-01-04
4   2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]

In [162]: date_cat.dt.day
Out[162]: 
0    1
1    2
2    3
3    4
4    5
dtype: int64
```

::: tip Note

The returned Series (or DataFrame) is of the same type as if you used the .str. / .dt. on a Series of that type (and not of type category!).

:::

That means that the accessor methods and properties return the same values for a Series and for the same Series converted to type category:

``` python
In [163]: ret_s = str_s.str.contains("a")

In [164]: ret_cat = str_cat.str.contains("a")

In [165]: ret_s.dtype == ret_cat.dtype
Out[165]: True

In [166]: ret_s == ret_cat
Out[166]: 
0    True
1    True
2    True
3    True
dtype: bool
```

::: tip Note

The work is done on the categories and then a new Series is constructed. This has some performance implication if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the original Series to one of type category and use .str. or .dt. on that.

:::
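A sketch of that pattern: for a highly repetitive string Series, converting once to category before using .str can pay off, while the results stay identical (timings are omitted; the data is illustrative):

```python
import pandas as pd

# Many repeats of only a few unique strings
s = pd.Series(["spam", "ham", "eggs"] * 10_000)

# On the categorical, the .str work runs once per category (3 times),
# not once per element (30,000 times)
res_obj = s.str.upper()
res_cat = s.astype("category").str.upper()
```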

Setting

Setting values in a categorical column (or Series) works as long as the value is included in the categories:

``` python
In [167]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])

In [168]: cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"],
   .....:                       categories=["a", "b"])
   .....: 

In [169]: values = [1, 1, 1, 1, 1, 1, 1]

In [170]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)

In [171]: df.iloc[2:4, :] = [["b", 2], ["b", 2]]

In [172]: df
Out[172]: 
  cats  values
h    a       1
i    a       1
j    b       2
k    b       2
l    a       1
m    a       1
n    a       1

In [173]: try:
   .....:     df.iloc[2:4, :] = [["c", 3], ["c", 3]]
   .....: except ValueError as e:
   .....:     print("ValueError:", str(e))
   .....: 
ValueError: Cannot setitem on a Categorical with a new category, set the categories first
```

Setting values by assigning categorical data will also check that the categories match:

``` python
In [174]: df.loc["j":"k", "cats"] = pd.Categorical(["a", "a"], categories=["a", "b"])

In [175]: df
Out[175]: 
  cats  values
h    a       1
i    a       1
j    a       2
k    a       2
l    a       1
m    a       1
n    a       1

In [176]: try:
   .....:     df.loc["j":"k", "cats"] = pd.Categorical(["b", "b"],
   .....:                                              categories=["a", "b", "c"])
   .....: except ValueError as e:
   .....:     print("ValueError:", str(e))
   .....: 
ValueError: Cannot set a Categorical with another, without identical categories
```

Assigning a Categorical to parts of a column of other types will use the values:

  1. In [177]: df = pd.DataFrame({"a": [1, 1, 1, 1, 1], "b": ["a", "a", "a", "a", "a"]})
  2. In [178]: df.loc[1:2, "a"] = pd.Categorical(["b", "b"], categories=["a", "b"])
  3. In [179]: df.loc[2:3, "b"] = pd.Categorical(["b", "b"], categories=["a", "b"])
  4. In [180]: df
  5. Out[180]:
  6. a b
  7. 0 1 a
  8. 1 b a
  9. 2 b b
  10. 3 1 b
  11. 4 1 a
  12. In [181]: df.dtypes
  13. Out[181]:
  14. a object
  15. b object
  16. dtype: object
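In contrast, assigning an entire Categorical column (rather than a slice of one) keeps the category dtype; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 1]})

# Whole-column assignment preserves the categorical dtype
df["b"] = pd.Categorical(["x", "y", "x"], categories=["x", "y"])
```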

Merging

You can concatenate two DataFrames containing categorical data, but the categories of these categoricals need to be the same:

  1. In [182]: cat = pd.Series(["a", "b"], dtype="category")
  2. In [183]: vals = [1, 2]
  3. In [184]: df = pd.DataFrame({"cats": cat, "vals": vals})
  4. In [185]: res = pd.concat([df, df])
  5. In [186]: res
  6. Out[186]:
  7. cats vals
  8. 0 a 1
  9. 1 b 2
  10. 0 a 1
  11. 1 b 2
  12. In [187]: res.dtypes
  13. Out[187]:
  14. cats category
  15. vals int64
  16. dtype: object

In this case the categories are not the same, and therefore an error is raised:

  1. In [188]: df_different = df.copy()
  2. In [189]: df_different["cats"].cat.categories = ["c", "d"]
  3. In [190]: try:
  4. .....: pd.concat([df, df_different])
  5. .....: except ValueError as e:
  6. .....: print("ValueError:", str(e))
  7. .....:

The same applies to df.append(df_different).
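One way to make such a concat work is to give both columns the same (union) categories up front with set_categories; a sketch with illustrative data:

```python
import pandas as pd

df1 = pd.DataFrame({"cats": pd.Series(["a", "b"], dtype="category"),
                    "vals": [1, 2]})
df2 = pd.DataFrame({"cats": pd.Series(["b", "c"], dtype="category"),
                    "vals": [3, 4]})

# Align both columns on the union of the categories, then concat
all_cats = ["a", "b", "c"]
df1["cats"] = df1["cats"].cat.set_categories(all_cats)
df2["cats"] = df2["cats"].cat.set_categories(all_cats)
res = pd.concat([df1, df2])
```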

See also the section on merge dtypes for notes about preserving merge dtypes and performance.

Unioning

New in version 0.19.0.

If you want to combine categoricals that do not necessarily have the same categories, the union_categoricals() function will combine a list-like of categoricals. The new categories will be the union of the categories being combined.

  1. In [191]: from pandas.api.types import union_categoricals
  2. In [192]: a = pd.Categorical(["b", "c"])
  3. In [193]: b = pd.Categorical(["a", "b"])
  4. In [194]: union_categoricals([a, b])
  5. Out[194]:
  6. [b, c, a, b]
  7. Categories (3, object): [b, c, a]

By default, the resulting categories will be ordered as they appear in the data. If you want the categories to be lexsorted, use the sort_categories=True argument.

  1. In [195]: union_categoricals([a, b], sort_categories=True)
  2. Out[195]:
  3. [b, c, a, b]
  4. Categories (3, object): [a, b, c]

union_categoricals also works for the “easy” case of combining two categoricals with the same categories and order information (e.g. what you could also use append for).

  1. In [196]: a = pd.Categorical(["a", "b"], ordered=True)
  2. In [197]: b = pd.Categorical(["a", "b", "a"], ordered=True)
  3. In [198]: union_categoricals([a, b])
  4. Out[198]:
  5. [a, b, a, b, a]
  6. Categories (2, object): [a < b]

The below raises TypeError because the categories are ordered and not identical.

  1. In [1]: a = pd.Categorical(["a", "b"], ordered=True)
  2. In [2]: b = pd.Categorical(["a", "b", "c"], ordered=True)
  3. In [3]: union_categoricals([a, b])
  4. Out[3]:
  5. TypeError: to union ordered Categoricals, all categories must be the same

New in version 0.20.0.

Ordered categoricals with different categories or orderings can be combined by using the ignore_order=True argument.

  1. In [199]: a = pd.Categorical(["a", "b", "c"], ordered=True)
  2. In [200]: b = pd.Categorical(["c", "b", "a"], ordered=True)
  3. In [201]: union_categoricals([a, b], ignore_order=True)
  4. Out[201]:
  5. [a, b, c, c, b, a]
  6. Categories (3, object): [a, b, c]

union_categoricals() also works with a CategoricalIndex, or Series containing categorical data, but note that the resulting array will always be a plain Categorical:

  1. In [202]: a = pd.Series(["b", "c"], dtype='category')
  2. In [203]: b = pd.Series(["a", "b"], dtype='category')
  3. In [204]: union_categoricals([a, b])
  4. Out[204]:
  5. [b, c, a, b]
  6. Categories (3, object): [b, c, a]

::: tip Note

union_categoricals may recode the integer codes for categories when combining categoricals. This is likely what you want, but if you are relying on the exact numbering of the categories, be aware.

  1. In [205]: c1 = pd.Categorical(["b", "c"])
  2. In [206]: c2 = pd.Categorical(["a", "b"])
  3. In [207]: c1
  4. Out[207]:
  5. [b, c]
  6. Categories (2, object): [b, c]
  7. # "b" is coded to 0
  8. In [208]: c1.codes
  9. Out[208]: array([0, 1], dtype=int8)
  10. In [209]: c2
  11. Out[209]:
  12. [a, b]
  13. Categories (2, object): [a, b]
  14. # "b" is coded to 1
  15. In [210]: c2.codes
  16. Out[210]: array([0, 1], dtype=int8)
  17. In [211]: c = union_categoricals([c1, c2])
  18. In [212]: c
  19. Out[212]:
  20. [b, c, a, b]
  21. Categories (3, object): [b, c, a]
  22. # "b" is coded to 0 throughout, same as c1, different from c2
  23. In [213]: c.codes
  24. Out[213]: array([0, 1, 2, 0], dtype=int8)

:::

Concatenation

This section describes concatenations specific to category dtype. See Concatenating objects for a general description.

By default, Series or DataFrame concatenation which contains the same categories results in category dtype, otherwise in object dtype. Use .astype or union_categoricals to get a category result.

  1. # same categories
  2. In [214]: s1 = pd.Series(['a', 'b'], dtype='category')
  3. In [215]: s2 = pd.Series(['a', 'b', 'a'], dtype='category')
  4. In [216]: pd.concat([s1, s2])
  5. Out[216]:
  6. 0 a
  7. 1 b
  8. 0 a
  9. 1 b
  10. 2 a
  11. dtype: category
  12. Categories (2, object): [a, b]
  13. # different categories
  14. In [217]: s3 = pd.Series(['b', 'c'], dtype='category')
  15. In [218]: pd.concat([s1, s3])
  16. Out[218]:
  17. 0 a
  18. 1 b
  19. 0 b
  20. 1 c
  21. dtype: object
  22. In [219]: pd.concat([s1, s3]).astype('category')
  23. Out[219]:
  24. 0 a
  25. 1 b
  26. 0 b
  27. 1 c
  28. dtype: category
  29. Categories (3, object): [a, b, c]
  30. In [220]: union_categoricals([s1.array, s3.array])
  31. Out[220]:
  32. [a, b, b, c]
  33. Categories (3, object): [a, b, c]

The following table summarizes the results of category-related concatenations.

arg1 | arg2 | result
--- | --- | ---
category | category (identical categories) | category
category | category (different categories, both not ordered) | object (dtype is inferred)
category | category (different categories, either one is ordered) | object (dtype is inferred)
category | not category | object (dtype is inferred)

Getting data in/out

You can write data that contains category dtypes to an HDFStore. See here for an example and caveats.

It is also possible to write data to and read data from Stata format files. See here for an example and caveats.

Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the right categories and categories ordering.

  1. In [221]: import io
  2. In [222]: s = pd.Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
  3. # rename the categories
  4. In [223]: s.cat.categories = ["very good", "good", "bad"]
  5. # reorder the categories and add missing categories
  6. In [224]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
  7. In [225]: df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
  8. In [226]: csv = io.StringIO()
  9. In [227]: df.to_csv(csv)
  10. In [228]: df2 = pd.read_csv(io.StringIO(csv.getvalue()))
  11. In [229]: df2.dtypes
  12. Out[229]:
  13. Unnamed: 0 int64
  14. cats object
  15. vals int64
  16. dtype: object
  17. In [230]: df2["cats"]
  18. Out[230]:
  19. 0 very good
  20. 1 good
  21. 2 good
  22. 3 very good
  23. 4 very good
  24. 5 bad
  25. Name: cats, dtype: object
  26. # Redo the category
  27. In [231]: df2["cats"] = df2["cats"].astype("category")
  28. In [232]: df2["cats"].cat.set_categories(["very bad", "bad", "medium",
  29. .....: "good", "very good"],
  30. .....: inplace=True)
  31. .....:
  32. In [233]: df2.dtypes
  33. Out[233]:
  34. Unnamed: 0 int64
  35. cats category
  36. vals int64
  37. dtype: object
  38. In [234]: df2["cats"]
  39. Out[234]:
  40. 0 very good
  41. 1 good
  42. 2 good
  43. 3 very good
  44. 4 very good
  45. 5 bad
  46. Name: cats, dtype: category
  47. Categories (5, object): [very bad, bad, medium, good, very good]

The same holds for writing to a SQL database with to_sql.
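As with CSV, the category information is lost on write and has to be restored after reading; a sketch using an in-memory SQLite database (the table name is illustrative):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"cats": pd.Series(["a", "b", "a"], dtype="category"),
                   "vals": [1, 2, 3]})

con = sqlite3.connect(":memory:")
df.to_sql("data", con, index=False)

# "cats" comes back as object, so redo the category dtype by hand
df2 = pd.read_sql("SELECT * FROM data", con)
df2["cats"] = df2["cats"].astype("category")
```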

Missing data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.

Missing values should not be included in the Categorical’s categories, only in the values. Instead, it is understood that NaN is different, and is always a possibility. When working with the Categorical’s codes, missing values will always have a code of -1.

  1. In [235]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")
  2. # only two categories
  3. In [236]: s
  4. Out[236]:
  5. 0 a
  6. 1 b
  7. 2 NaN
  8. 3 a
  9. dtype: category
  10. Categories (2, object): [a, b]
  11. In [237]: s.cat.codes
  12. Out[237]:
  13. 0 0
  14. 1 1
  15. 2 -1
  16. 3 0
  17. dtype: int8

Methods for working with missing data, e.g. isna(), fillna(), dropna(), all work normally:

  1. In [238]: s = pd.Series(["a", "b", np.nan], dtype="category")
  2. In [239]: s
  3. Out[239]:
  4. 0 a
  5. 1 b
  6. 2 NaN
  7. dtype: category
  8. Categories (2, object): [a, b]
  9. In [240]: pd.isna(s)
  10. Out[240]:
  11. 0 False
  12. 1 False
  13. 2 True
  14. dtype: bool
  15. In [241]: s.fillna("a")
  16. Out[241]:
  17. 0 a
  18. 1 b
  19. 2 a
  20. dtype: category
  21. Categories (2, object): [a, b]

Differences to R’s factor

The following differences to R’s factor functions can be observed:

  • R’s levels are named categories.
  • R’s levels are always of type string, while categories in pandas can be of any dtype.
  • It’s not possible to specify labels at creation time. Use s.cat.rename_categories(new_labels) afterwards.
  • In contrast to R’s factor function, using categorical data as the sole input to create a new categorical series will not remove unused categories but create a new categorical series which is equal to the passed in one!
  • R allows for missing values to be included in its levels (pandas’ categories). Pandas does not allow NaN categories, but missing values can still be in the values.
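As the third point notes, labels can be applied after creation with rename_categories; a minimal sketch:

```python
import pandas as pd

s = pd.Series(pd.Categorical([1, 2, 3, 1]))

# Relabel the numeric categories 1, 2, 3 after the fact
s = s.cat.rename_categories(["one", "two", "three"])
```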

Gotchas

Memory usage

The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, the memory usage of an object dtype is a constant times the length of the data.

  1. In [242]: s = pd.Series(['foo', 'bar'] * 1000)
  2. # object dtype
  3. In [243]: s.nbytes
  4. Out[243]: 16000
  5. # category dtype
  6. In [244]: s.astype('category').nbytes
  7. Out[244]: 2016

::: tip Note

If the number of categories approaches the length of the data, the Categorical will use nearly the same or more memory than an equivalent object dtype representation.

  1. In [245]: s = pd.Series(['foo%04d' % i for i in range(2000)])
  2. # object dtype
  3. In [246]: s.nbytes
  4. Out[246]: 16000
  5. # category dtype
  6. In [247]: s.astype('category').nbytes
  7. Out[247]: 20000

:::

Categorical is not a numpy array

Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-level NumPy array dtype. This leads to some problems.

NumPy itself doesn’t know about the new dtype:

  1. In [248]: try:
  2. .....: np.dtype("category")
  3. .....: except TypeError as e:
  4. .....: print("TypeError:", str(e))
  5. .....:
  6. TypeError: data type "category" not understood
  7. In [249]: dtype = pd.Categorical(["a"]).dtype
  8. In [250]: try:
  9. .....: np.dtype(dtype)
  10. .....: except TypeError as e:
  11. .....: print("TypeError:", str(e))
  12. .....:
  13. TypeError: data type not understood

Dtype comparisons work:

  1. In [251]: dtype == np.str_
  2. Out[251]: False
  3. In [252]: np.str_ == dtype
  4. Out[252]: False

To check if a Series contains Categorical data, use hasattr(s, 'cat'):

  1. In [253]: hasattr(pd.Series(['a'], dtype='category'), 'cat')
  2. Out[253]: True
  3. In [254]: hasattr(pd.Series(['a']), 'cat')
  4. Out[254]: False
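A more explicit alternative is to inspect the dtype itself; a sketch:

```python
import pandas as pd

s = pd.Series(["a", "b"], dtype="category")

# Check the dtype directly instead of relying on the .cat accessor
is_cat = isinstance(s.dtype, pd.CategoricalDtype)
not_cat = isinstance(pd.Series(["a"]).dtype, pd.CategoricalDtype)
```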

Using NumPy functions on a Series of type category should not work as Categoricals are not numeric data (even in the case that .categories is numeric).

  1. In [255]: s = pd.Series(pd.Categorical([1, 2, 3, 4]))
  2. In [256]: try:
  3. .....: np.sum(s)
  4. .....: except TypeError as e:
  5. .....: print("TypeError:", str(e))
  6. .....:
  7. TypeError: Categorical cannot perform the operation sum

::: tip Note

If such a function works, please file a bug at https://github.com/pandas-dev/pandas!

:::

dtype in apply

pandas currently does not preserve the dtype in apply functions: if you apply along rows you get a Series of object dtype (the same as getting a row, where getting one element returns a basic type), and applying along columns will also convert to object. NaN values are unaffected. You can use fillna to handle missing values before applying a function.

  1. In [257]: df = pd.DataFrame({"a": [1, 2, 3, 4],
  2. .....: "b": ["a", "b", "c", "d"],
  3. .....: "cats": pd.Categorical([1, 2, 3, 2])})
  4. .....:
  5. In [258]: df.apply(lambda row: type(row["cats"]), axis=1)
  6. Out[258]:
  7. 0 <class 'int'>
  8. 1 <class 'int'>
  9. 2 <class 'int'>
  10. 3 <class 'int'>
  11. dtype: object
  12. In [259]: df.apply(lambda col: col.dtype, axis=0)
  13. Out[259]:
  14. a int64
  15. b object
  16. cats category
  17. dtype: object

Categorical index

CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated elements. See the advanced indexing docs for a more detailed explanation.

Setting the index will create a CategoricalIndex:

  1. In [260]: cats = pd.Categorical([1, 2, 3, 4], categories=[4, 2, 3, 1])
  2. In [261]: strings = ["a", "b", "c", "d"]
  3. In [262]: values = [4, 2, 3, 1]
  4. In [263]: df = pd.DataFrame({"strings": strings, "values": values}, index=cats)
  5. In [264]: df.index
  6. Out[264]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False, dtype='category')
  7. # This now sorts by the categories order
  8. In [265]: df.sort_index()
  9. Out[265]:
  10. strings values
  11. 4 d 1
  12. 2 b 2
  13. 3 c 3
  14. 1 a 4
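Selecting a label that occurs more than once returns every matching row at once; a minimal sketch:

```python
import pandas as pd

idx = pd.CategoricalIndex(["a", "a", "b", "b"])
s = pd.Series([1, 2, 3, 4], index=idx)

# All rows labelled "a" come back in one selection
sub = s.loc["a"]
```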

Side effects

Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to the Series will in most cases change the original Categorical:

  1. In [266]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
  2. In [267]: s = pd.Series(cat, name="cat")
  3. In [268]: cat
  4. Out[268]:
  5. [1, 2, 3, 10]
  6. Categories (5, int64): [1, 2, 3, 4, 10]
  7. In [269]: s.iloc[0:2] = 10
  8. In [270]: cat
  9. Out[270]:
  10. [10, 10, 3, 10]
  11. Categories (5, int64): [1, 2, 3, 4, 10]
  12. In [271]: df = pd.DataFrame(s)
  13. In [272]: df["cat"].cat.categories = [1, 2, 3, 4, 5]
  14. In [273]: cat
  15. Out[273]:
  16. [5, 5, 3, 5]
  17. Categories (5, int64): [1, 2, 3, 4, 5]

Use copy=True to prevent such behaviour, or simply don’t reuse Categoricals:

  1. In [274]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
  2. In [275]: s = pd.Series(cat, name="cat", copy=True)
  3. In [276]: cat
  4. Out[276]:
  5. [1, 2, 3, 10]
  6. Categories (5, int64): [1, 2, 3, 4, 10]
  7. In [277]: s.iloc[0:2] = 10
  8. In [278]: cat
  9. Out[278]:
  10. [1, 2, 3, 10]
  11. Categories (5, int64): [1, 2, 3, 4, 10]

::: tip Note

This also happens in some cases when you supply a NumPy array instead of a Categorical: using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behavior, while using a string array (e.g. np.array(["a","b","c","a"])) will not.

:::