Frequently Asked Questions (FAQ)

DataFrame memory usage

The memory usage of a DataFrame (including the index) is shown when calling info(). A configuration option, display.memory_usage (see the list of options), specifies whether the DataFrame’s memory usage will be displayed when invoking the df.info() method.

For example, the memory usage of the DataFrame below is shown when calling info():

  In [1]: dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]',
     ...:           'complex128', 'object', 'bool']
     ...:
  In [2]: n = 5000
  In [3]: data = {t: np.random.randint(100, size=n).astype(t) for t in dtypes}
  In [4]: df = pd.DataFrame(data)
  In [5]: df['categorical'] = df['object'].astype('category')
  In [6]: df.info()
  <class 'pandas.core.frame.DataFrame'>
  RangeIndex: 5000 entries, 0 to 4999
  Data columns (total 8 columns):
  int64              5000 non-null int64
  float64            5000 non-null float64
  datetime64[ns]     5000 non-null datetime64[ns]
  timedelta64[ns]    5000 non-null timedelta64[ns]
  complex128         5000 non-null complex128
  object             5000 non-null object
  bool               5000 non-null bool
  categorical        5000 non-null category
  dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
  memory usage: 289.1+ KB

The + symbol indicates that the true memory usage could be higher, because pandas does not count the memory used by values in columns with dtype=object.

Passing memory_usage='deep' will enable a more accurate memory usage report, accounting for the full usage of the contained objects. This is optional as it can be expensive to do this deeper introspection.

  In [7]: df.info(memory_usage='deep')
  <class 'pandas.core.frame.DataFrame'>
  RangeIndex: 5000 entries, 0 to 4999
  Data columns (total 8 columns):
  int64              5000 non-null int64
  float64            5000 non-null float64
  datetime64[ns]     5000 non-null datetime64[ns]
  timedelta64[ns]    5000 non-null timedelta64[ns]
  complex128         5000 non-null complex128
  object             5000 non-null object
  bool               5000 non-null bool
  categorical        5000 non-null category
  dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
  memory usage: 425.6 KB

By default the display option is set to True but can be explicitly overridden by passing the memory_usage argument when invoking df.info().
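For instance, the memory usage line can be suppressed globally via the option and re-enabled for a single call with the argument. A minimal sketch, using only the option and argument names documented above:

  >>> pd.set_option('display.memory_usage', False)
  >>> df.info()                   # the "memory usage" line is now omitted
  >>> df.info(memory_usage=True)  # the argument overrides the option for this call
  >>> pd.set_option('display.memory_usage', True)  # restore the default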

The memory usage of each column can be found by calling the memory_usage() method. This returns a Series whose index consists of the column names, with the memory usage of each column shown in bytes. For the DataFrame above, the memory usage of each column and the total memory usage can be found with the memory_usage method:

  In [8]: df.memory_usage()
  Out[8]:
  Index                128
  int64              40000
  float64            40000
  datetime64[ns]     40000
  timedelta64[ns]    40000
  complex128         80000
  object             40000
  bool                5000
  categorical        10920
  dtype: int64

  # total memory usage of dataframe
  In [9]: df.memory_usage().sum()
  Out[9]: 296048

By default, the memory usage of the DataFrame’s index is included in the returned Series. The memory usage of the index can be suppressed by passing the index=False argument:

  In [10]: df.memory_usage(index=False)
  Out[10]:
  int64              40000
  float64            40000
  datetime64[ns]     40000
  timedelta64[ns]    40000
  complex128         80000
  object             40000
  bool                5000
  categorical        10920
  dtype: int64

The memory usage displayed by the info() method utilizes the memory_usage() method to determine the memory usage of a DataFrame while also formatting the output in human-readable units (base-2 representation; i.e. 1KB = 1024 bytes).
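For example, dividing the total reported by memory_usage() by 1024 reproduces the figure shown by info() above:

  >>> df.memory_usage().sum() / 1024  # 296048 bytes
  289.109375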

See also Categorical Memory Usage.

Using if/truth statements with pandas

pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens in an if-statement or when using the boolean operations: and, or, and not. It is not clear what the result of the following code should be:

  >>> if pd.Series([False, True, False]):
  ...     pass

Should it be True because it’s not zero-length, or False because there are False values? It is unclear, so instead, pandas raises a ValueError:

  >>> if pd.Series([False, True, False]):
  ...     print("I was true")
  Traceback
      ...
  ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().

You need to explicitly choose what you want to do with the DataFrame, e.g. use any(), all() or the empty attribute. Alternatively, you might want to check whether the pandas object is None:

  >>> if pd.Series([False, True, False]) is not None:
  ...     print("I was not None")
  I was not None

Below is how to check if any of the values are True:

  >>> if pd.Series([False, True, False]).any():
  ...     print("I am any")
  I am any
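
Similarly, all() and the empty attribute cover the other explicit choices:

  >>> pd.Series([False, True, False]).all()
  False
  >>> pd.Series([False, True, False]).empty
  False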

To evaluate single-element pandas objects in a boolean context, use the method bool():

  In [11]: pd.Series([True]).bool()
  Out[11]: True
  In [12]: pd.Series([False]).bool()
  Out[12]: False
  In [13]: pd.DataFrame([[True]]).bool()
  Out[13]: True
  In [14]: pd.DataFrame([[False]]).bool()
  Out[14]: False

Bitwise boolean

Bitwise boolean operators like == and != perform element-wise comparisons and return a boolean Series, which is almost always what you want anyway.

  >>> s = pd.Series(range(5))
  >>> s == 4
  0    False
  1    False
  2    False
  3    False
  4     True
  dtype: bool

See boolean comparisons for more examples.

Using the in operator

Using the Python in operator on a Series tests for membership in the index, not membership among the values.

  In [15]: s = pd.Series(range(5), index=list('abcde'))
  In [16]: 2 in s
  Out[16]: False
  In [17]: 'b' in s
  Out[17]: True

If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series are dict-like. To test for membership in the values, use the method isin():

  In [18]: s.isin([2])
  Out[18]:
  a    False
  b    False
  c     True
  d    False
  e    False
  dtype: bool
  In [19]: s.isin([2]).any()
  Out[19]: True

For DataFrames, likewise, in applies to the column axis, testing for membership in the list of column names.
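
For example, with a small illustrative DataFrame, membership is checked against the column names, not the values:

  >>> df = pd.DataFrame({'name': ['alice', 'bob'], 'age': [30, 25]})
  >>> 'name' in df
  True
  >>> 'alice' in df
  False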

NaN, Integer NA values and NA type promotions

Choice of NA representation

Given the lack of NA (missing) support from the ground up in NumPy and Python in general, we were faced with the difficult choice between:

  • A masked array solution: an array of data and an array of boolean values indicating whether a value is there or is missing.
  • Using a special sentinel value, bit pattern, or set of sentinel values to denote NA across the dtypes.

For many reasons we chose the latter. After years of production use it has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to detect NA values.

However, this approach comes with a couple of trade-offs which I most certainly have not ignored.

Support for integer NA

In the absence of high performance NA support being built into NumPy from the ground up, the primary casualty is the ability to represent NAs in integer arrays. For example:

  In [20]: s = pd.Series([1, 2, 3, 4, 5], index=list('abcde'))
  In [21]: s
  Out[21]:
  a    1
  b    2
  c    3
  d    4
  e    5
  dtype: int64
  In [22]: s.dtype
  Out[22]: dtype('int64')
  In [23]: s2 = s.reindex(['a', 'b', 'c', 'f', 'u'])
  In [24]: s2
  Out[24]:
  a    1.0
  b    2.0
  c    3.0
  f    NaN
  u    NaN
  dtype: float64
  In [25]: s2.dtype
  Out[25]: dtype('float64')

This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues to be “numeric”.

If you need to represent integers with possibly missing values, use one of the nullable-integer extension dtypes provided by pandas:

  In [26]: s_int = pd.Series([1, 2, 3, 4, 5], index=list('abcde'),
     ....:                   dtype=pd.Int64Dtype())
     ....:
  In [27]: s_int
  Out[27]:
  a    1
  b    2
  c    3
  d    4
  e    5
  dtype: Int64
  In [28]: s_int.dtype
  Out[28]: Int64Dtype()
  In [29]: s2_int = s_int.reindex(['a', 'b', 'c', 'f', 'u'])
  In [30]: s2_int
  Out[30]:
  a      1
  b      2
  c      3
  f    NaN
  u    NaN
  dtype: Int64
  In [31]: s2_int.dtype
  Out[31]: Int64Dtype()

See Nullable integer data type for more.

NA type promotions

When introducing NAs into an existing Series or DataFrame via reindex() or some other means, boolean and integer types will be promoted to a different dtype in order to store the NAs. The promotions are summarized in this table:

Typeclass   Promotion dtype for storing NAs
floating    no change
object      no change
integer     cast to float64
boolean     cast to object
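
For example, introducing an NA into a boolean Series promotes it to object, per the table above:

  >>> s = pd.Series([True, False, True])
  >>> s.dtype
  dtype('bool')
  >>> s.reindex([0, 1, 2, 3]).dtype
  dtype('O')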

While this may seem like a heavy trade-off, I have found very few cases where this is an issue in practice, i.e. storing values greater than 2**53. Some explanation for the motivation is in the next section.
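
The 2**53 figure comes from the 53-bit significand of float64: beyond that point, not every integer has an exact floating-point representation, as a quick check shows:

  >>> float(2**53) == float(2**53 + 1)  # both round to the same float64 value
  True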

Why not make NumPy like R?

Many people have suggested that NumPy should simply emulate the NA support present in the more domain-specific statistical programming language R. Part of the reason is the NumPy type hierarchy:

Typeclass               Dtypes
numpy.floating          float16, float32, float64, float128
numpy.integer           int8, int16, int32, int64
numpy.unsignedinteger   uint8, uint16, uint32, uint64
numpy.object_           object_
numpy.bool_             bool_
numpy.character         string, unicode

The R language, by contrast, only has a handful of built-in data types: integer, numeric (floating-point), character, and boolean. NA types are implemented by reserving special bit patterns for each type to be used as the missing value. While doing this with the full NumPy type hierarchy would be possible, it would be a more substantial trade-off (especially for the 8- and 16-bit data types) and implementation undertaking.

An alternate approach is that of using masked arrays. A masked array is an array of data with an associated boolean mask denoting whether each value should be considered NA or not. I am personally not in love with this approach as I feel that overall it places a fairly heavy burden on the user and the library implementer. Additionally, it exacts a fairly high performance cost when working with numerical data compared with the simple approach of using NaN. Thus, I have chosen the Pythonic “practicality beats purity” approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced.

Differences with NumPy

For Series and DataFrame objects, var() normalizes by N-1 to produce unbiased estimates of the sample variance, while NumPy’s var normalizes by N, which measures the variance of the sample. Note that cov() normalizes by N-1 in both pandas and NumPy.
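
For example, on the same data (a small illustrative Series):

  >>> s = pd.Series([1.0, 2.0, 3.0, 4.0])
  >>> s.var()         # pandas: normalizes by N-1 (ddof=1)
  1.6666666666666667
  >>> s.values.var()  # NumPy: normalizes by N (ddof=0)
  1.25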

Thread-safety

As of pandas 0.11, pandas is not 100% thread safe. The known issues relate to the copy() method. If you are doing a lot of copying of DataFrame objects shared among threads, we recommend holding locks inside the threads where the data copying occurs.

See this link for more information.
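
As a minimal sketch of that locking pattern (the worker logic here is illustrative, not part of pandas):

  import threading

  import pandas as pd

  lock = threading.Lock()
  df = pd.DataFrame({'a': range(1000)})

  def worker():
      # copy() is the known thread-unsafe operation, so hold the
      # lock only while the shared DataFrame is being copied.
      with lock:
          local = df.copy()
      local['a'] += 1  # work on the private copy outside the lock

  threads = [threading.Thread(target=worker) for _ in range(4)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()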

Byte-Ordering issues

Occasionally you may have to deal with data that were created on a machine with a different byte order than the one on which you are running Python. A common symptom of this issue is an error like:

  Traceback
      ...
  ValueError: Big-endian buffer not supported on little-endian compiler

To deal with this issue, you should convert the underlying NumPy array to the native system byte order before passing it to the Series or DataFrame constructor, using something similar to the following:

  In [32]: x = np.array(list(range(10)), '>i4')  # big endian
  In [33]: newx = x.byteswap().newbyteorder()  # force native byteorder
  In [34]: s = pd.Series(newx)

See the NumPy documentation on byte order for more details.