Sparse data structures

::: tip Note

SparseSeries and SparseDataFrame have been deprecated. Their purpose is served equally well by a Series or DataFrame with sparse values. See Migrating for tips on migrating.

:::

Pandas provides data structures for efficiently storing sparse data. These are not necessarily sparse in the typical “mostly 0” sense. Rather, you can view these objects as being “compressed” where any data matching a specific value (NaN / missing value, though any value can be chosen, including 0) is omitted. The compressed values are not actually stored in the array.

```python
In [1]: arr = np.random.randn(10)

In [2]: arr[2:-2] = np.nan

In [3]: ts = pd.Series(pd.SparseArray(arr))

In [4]: ts
Out[4]:
0    0.469112
1   -0.282863
2         NaN
3         NaN
4         NaN
5         NaN
6         NaN
7         NaN
8   -0.861849
9   -2.104569
dtype: Sparse[float64, nan]
```

Notice the dtype, Sparse[float64, nan]. The nan means that elements in the array that are nan aren’t actually stored; only the non-nan elements are. Those non-nan elements have a float64 dtype.

The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:

```python
In [5]: df = pd.DataFrame(np.random.randn(10000, 4))

In [6]: df.iloc[:9998] = np.nan

In [7]: sdf = df.astype(pd.SparseDtype("float", np.nan))

In [8]: sdf.head()
Out[8]:
    0   1   2   3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN

In [9]: sdf.dtypes
Out[9]:
0    Sparse[float64, nan]
1    Sparse[float64, nan]
2    Sparse[float64, nan]
3    Sparse[float64, nan]
dtype: object

In [10]: sdf.sparse.density
Out[10]: 0.0002
```

As you can see, the density (% of values that have not been “compressed”) is extremely low. This sparse object takes up much less memory on disk (pickled) and in the Python interpreter.

```python
In [11]: 'dense : {:0.2f} KB'.format(df.memory_usage().sum() / 1e3)
Out[11]: 'dense : 320.13 KB'

In [12]: 'sparse: {:0.2f} KB'.format(sdf.memory_usage().sum() / 1e3)
Out[12]: 'sparse: 0.22 KB'
```

Functionally, their behavior should be nearly identical to their dense counterparts.
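
As a quick, illustrative check (this snippet is not part of the original examples), elementwise arithmetic on a Series with sparse values gives the same result as on its dense counterpart; only the storage differs:

```python
import numpy as np
import pandas as pd

# Build a dense Series and a sparse equivalent of it.
arr = np.random.randn(10)
arr[2:-2] = np.nan
dense = pd.Series(arr)
sparse = dense.astype(pd.SparseDtype("float", np.nan))

# Elementwise arithmetic: identical values, different storage.
result_dense = dense * 2 + 1
result_sparse = (sparse * 2 + 1).sparse.to_dense()
assert result_dense.equals(result_sparse)
```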

SparseArray

SparseArray is an ExtensionArray for storing an array of sparse values (see dtypes for more on extension arrays). It is a 1-dimensional ndarray-like object storing only values distinct from the fill_value:

```python
In [13]: arr = np.random.randn(10)

In [14]: arr[2:5] = np.nan

In [15]: arr[7:8] = np.nan

In [16]: sparr = pd.SparseArray(arr)

In [17]: sparr
Out[17]:
[-1.9556635297215477, -1.6588664275960427, nan, nan, nan, 1.1589328886422277, 0.14529711373305043, nan, 0.6060271905134522, 1.3342113401317768]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
```

A sparse array can be converted to a regular (dense) ndarray with numpy.asarray():

```python
In [18]: np.asarray(sparr)
Out[18]:
array([-1.9557, -1.6589,     nan,     nan,     nan,  1.1589,  0.1453,
           nan,  0.606 ,  1.3342])
```

SparseDtype

The SparseArray.dtype property stores two pieces of information:

1. The dtype of the non-sparse values
2. The scalar fill value

```python
In [19]: sparr.dtype
Out[19]: Sparse[float64, nan]
```

A SparseDtype may be constructed by passing each of these:

```python
In [20]: pd.SparseDtype(np.dtype('datetime64[ns]'))
Out[20]: Sparse[datetime64[ns], NaT]
```

The default fill value for a given NumPy dtype is the “missing” value for that dtype, though it may be overridden.

```python
In [21]: pd.SparseDtype(np.dtype('datetime64[ns]'),
   ....:                fill_value=pd.Timestamp('2017-01-01'))
   ....:
Out[21]: Sparse[datetime64[ns], 2017-01-01 00:00:00]
```

Finally, the string alias 'Sparse[dtype]' may be used to specify a sparse dtype in many places:

```python
In [22]: pd.array([1, 0, 0, 2], dtype='Sparse[int]')
Out[22]:
[1, 0, 0, 2]
Fill: 0
IntIndex
Indices: array([0, 3], dtype=int32)
```

Sparse accessor

New in version 0.24.0.

Pandas provides a .sparse accessor, similar to .str for string data, .cat for categorical data, and .dt for datetime-like data. This namespace provides attributes and methods that are specific to sparse data.

```python
In [23]: s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")

In [24]: s.sparse.density
Out[24]: 0.5

In [25]: s.sparse.fill_value
Out[25]: 0
```

This accessor is available only on data with SparseDtype, and on the Series class itself for creating a Series with sparse data from a scipy COO matrix with Series.sparse.from_coo().
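
As a small, illustrative check (the exact error message may differ between pandas versions), accessing .sparse on a Series that does not hold sparse data raises an AttributeError:

```python
import pandas as pd

sparse_s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")
dense_s = pd.Series([0, 0, 1, 2])

print(sparse_s.sparse.density)  # works: the data has a SparseDtype

try:
    dense_s.sparse.density      # the accessor rejects non-sparse data
except AttributeError as err:
    print(err)
```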

New in version 0.25.0.

A .sparse accessor has been added for DataFrame as well. See Sparse accessor for more.

Sparse calculation

You can apply NumPy ufuncs to SparseArray and get a SparseArray as a result.

```python
In [26]: arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])

In [27]: np.abs(arr)
Out[27]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
```

The ufunc is also applied to fill_value. This is needed to get the correct dense result.

```python
In [28]: arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)

In [29]: np.abs(arr)
Out[29]:
[1.0, 1, 1, 2.0, 1]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)

In [30]: np.abs(arr).to_dense()
Out[30]: array([1., 1., 1., 2., 1.])
```

Migrating

In older versions of pandas, the SparseSeries and SparseDataFrame classes (documented below) were the preferred way to work with sparse data. With the advent of extension arrays, these subclasses are no longer needed. Their purpose is better served by using a regular Series or DataFrame with sparse values instead.

::: tip Note

There’s no performance or memory penalty to using a Series or DataFrame with sparse values, rather than a SparseSeries or SparseDataFrame.

:::

This section provides some guidance on migrating your code to the new style. As a reminder, you can use the Python warnings module to control warnings, but we recommend modifying your code rather than ignoring the warning.
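
For example, a minimal sketch of temporarily silencing these deprecation warnings while you migrate (the message pattern below is illustrative; match it to the warning text your pandas version actually emits):

```python
import warnings

# Illustrative filter: ignore FutureWarnings that mention the deprecated
# sparse subclasses while the code base is being migrated.
warnings.filterwarnings(
    "ignore",
    message=".*Sparse(Series|DataFrame).*",
    category=FutureWarning,
)
```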

Construction

From an array-like, use the regular Series or DataFrame constructors with SparseArray values.

```python
# Previous way
>>> pd.SparseDataFrame({"A": [0, 1]})
```

```python
# New way
In [31]: pd.DataFrame({"A": pd.SparseArray([0, 1])})
Out[31]:
   A
0  0
1  1
```

From a SciPy sparse matrix, use DataFrame.sparse.from_spmatrix():

```python
# Previous way
>>> from scipy import sparse
>>> mat = sparse.eye(3)
>>> df = pd.SparseDataFrame(mat, columns=['A', 'B', 'C'])
```

```python
# New way
In [32]: from scipy import sparse

In [33]: mat = sparse.eye(3)

In [34]: df = pd.DataFrame.sparse.from_spmatrix(mat, columns=['A', 'B', 'C'])

In [35]: df.dtypes
Out[35]:
A    Sparse[float64, 0.0]
B    Sparse[float64, 0.0]
C    Sparse[float64, 0.0]
dtype: object
```

Conversion

From sparse to dense, use the .sparse accessors:

```python
In [36]: df.sparse.to_dense()
Out[36]:
     A    B    C
0  1.0  0.0  0.0
1  0.0  1.0  0.0
2  0.0  0.0  1.0

In [37]: df.sparse.to_coo()
Out[37]:
<3x3 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>
```

From dense to sparse, use DataFrame.astype() with a SparseDtype.

```python
In [38]: dense = pd.DataFrame({"A": [1, 0, 0, 1]})

In [39]: dtype = pd.SparseDtype(int, fill_value=0)

In [40]: dense.astype(dtype)
Out[40]:
   A
0  1
1  0
2  0
3  1
```

Sparse properties

Sparse-specific properties, like density, are available on the .sparse accessor.

```python
In [41]: df.sparse.density
Out[41]: 0.3333333333333333
```

General differences

In a SparseDataFrame, all columns were sparse. A DataFrame can have a mixture of sparse and dense columns. As a consequence, assigning new columns to a DataFrame with sparse values will not automatically convert the input to be sparse.

```python
# Previous way
>>> df = pd.SparseDataFrame({"A": [0, 1]})
>>> df['B'] = [0, 0]  # implicitly becomes Sparse
>>> df['B'].dtype
Sparse[int64, nan]
```

Instead, you’ll need to ensure that the values being assigned are sparse:

```python
In [42]: df = pd.DataFrame({"A": pd.SparseArray([0, 1])})

In [43]: df['B'] = [0, 0]  # remains dense

In [44]: df['B'].dtype
Out[44]: dtype('int64')

In [45]: df['B'] = pd.SparseArray([0, 0])

In [46]: df['B'].dtype
Out[46]: Sparse[int64, 0]
```

The SparseDataFrame.default_kind and SparseDataFrame.default_fill_value attributes have no replacement.
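
If you previously relied on a frame-wide default fill value, a rough equivalent (an illustrative sketch, not a drop-in replacement) is to read each sparse column's fill value from its SparseDtype:

```python
import pandas as pd

df = pd.DataFrame({"A": pd.SparseArray([0, 1]),
                   "B": pd.SparseArray([1.0, 0.0])})

# Each sparse column carries its own fill value on its dtype.
fill_values = {col: dtype.fill_value for col, dtype in df.dtypes.items()}
print(fill_values)  # e.g. {'A': 0, 'B': nan}
```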

Interaction with scipy.sparse

Use DataFrame.sparse.from_spmatrix() to create a DataFrame with sparse values from a sparse matrix.

New in version 0.25.0.

```python
In [47]: from scipy.sparse import csr_matrix

In [48]: arr = np.random.random(size=(1000, 5))

In [49]: arr[arr < .9] = 0

In [50]: sp_arr = csr_matrix(arr)

In [51]: sp_arr
Out[51]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
	with 517 stored elements in Compressed Sparse Row format>

In [52]: sdf = pd.DataFrame.sparse.from_spmatrix(sp_arr)

In [53]: sdf.head()
Out[53]:
          0    1    2         3    4
0  0.956380  0.0  0.0  0.000000  0.0
1  0.000000  0.0  0.0  0.000000  0.0
2  0.000000  0.0  0.0  0.000000  0.0
3  0.000000  0.0  0.0  0.000000  0.0
4  0.999552  0.0  0.0  0.956153  0.0

In [54]: sdf.dtypes
Out[54]:
0    Sparse[float64, 0.0]
1    Sparse[float64, 0.0]
2    Sparse[float64, 0.0]
3    Sparse[float64, 0.0]
4    Sparse[float64, 0.0]
dtype: object
```

All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as needed. To convert back to a sparse SciPy matrix in COO format, you can use the DataFrame.sparse.to_coo() method:

```python
In [55]: sdf.sparse.to_coo()
Out[55]:
<1000x5 sparse matrix of type '<class 'numpy.float64'>'
	with 517 stored elements in COOrdinate format>
```

Series.sparse.to_coo() is implemented for transforming a Series with sparse values indexed by a MultiIndex to a scipy.sparse.coo_matrix.

The method requires a MultiIndex with two or more levels.

```python
In [56]: s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])

In [57]: s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
   ....:                                      (1, 2, 'a', 1),
   ....:                                      (1, 1, 'b', 0),
   ....:                                      (1, 1, 'b', 1),
   ....:                                      (2, 1, 'b', 0),
   ....:                                      (2, 1, 'b', 1)],
   ....:                                     names=['A', 'B', 'C', 'D'])
   ....:

In [58]: s
Out[58]:
A  B  C  D
1  2  a  0    3.0
         1    NaN
   1  b  0    1.0
         1    3.0
2  1  b  0    NaN
         1    NaN
dtype: float64

In [59]: ss = s.astype('Sparse')

In [60]: ss
Out[60]:
A  B  C  D
1  2  a  0    3.0
         1    NaN
   1  b  0    1.0
         1    3.0
2  1  b  0    NaN
         1    NaN
dtype: Sparse[float64, nan]
```

In the example below, we transform the Series to a sparse representation of a 2-d array by specifying that the first and second MultiIndex levels define labels for the rows and the third and fourth levels define labels for the columns. We also specify that the column and row labels should be sorted in the final sparse representation.

```python
In [61]: A, rows, columns = ss.sparse.to_coo(row_levels=['A', 'B'],
   ....:                                     column_levels=['C', 'D'],
   ....:                                     sort_labels=True)
   ....:

In [62]: A
Out[62]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [63]: A.todense()
Out[63]:
matrix([[0., 0., 1., 3.],
        [3., 0., 0., 0.],
        [0., 0., 0., 0.]])

In [64]: rows
Out[64]: [(1, 1), (1, 2), (2, 1)]

In [65]: columns
Out[65]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
```

Specifying different row and column labels (and not sorting them) yields a different sparse matrix:

```python
In [66]: A, rows, columns = ss.sparse.to_coo(row_levels=['A', 'B', 'C'],
   ....:                                     column_levels=['D'],
   ....:                                     sort_labels=False)
   ....:

In [67]: A
Out[67]:
<3x2 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [68]: A.todense()
Out[68]:
matrix([[3., 0.],
        [1., 3.],
        [0., 0.]])

In [69]: rows
Out[69]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]

In [70]: columns
Out[70]: [0, 1]
```

A convenience method Series.sparse.from_coo() is implemented for creating a Series with sparse values from a scipy.sparse.coo_matrix.

```python
In [71]: from scipy import sparse

In [72]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
   ....:                       shape=(3, 4))
   ....:

In [73]: A
Out[73]:
<3x4 sparse matrix of type '<class 'numpy.float64'>'
	with 3 stored elements in COOrdinate format>

In [74]: A.todense()
Out[74]:
matrix([[0., 0., 1., 2.],
        [3., 0., 0., 0.],
        [0., 0., 0., 0.]])
```

The default behaviour (with dense_index=False) simply returns a Series containing only the non-null entries.

```python
In [75]: ss = pd.Series.sparse.from_coo(A)

In [76]: ss
Out[76]:
0  2    1.0
   3    2.0
1  0    3.0
dtype: Sparse[float64, nan]
```

Specifying dense_index=True will result in an index that is the Cartesian product of the row and columns coordinates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False) if the sparse matrix is large (and sparse) enough.

```python
In [77]: ss_dense = pd.Series.sparse.from_coo(A, dense_index=True)

In [78]: ss_dense
Out[78]:
0  0    NaN
   1    NaN
   2    1.0
   3    2.0
1  0    3.0
   1    NaN
   2    NaN
   3    NaN
2  0    NaN
   1    NaN
   2    NaN
   3    NaN
dtype: Sparse[float64, nan]
```

Sparse subclasses

The SparseSeries and SparseDataFrame classes are deprecated. Visit their API pages for usage.