
What's new in v0.25.0 (July 18, 2019)

::: danger Warning

Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher. See Plan for dropping Python 2.7 for more details.

:::

::: danger Warning

The minimum supported Python version will be raised to 3.6 in a future release.

:::

::: danger Warning

Panel has been fully removed. For N-D labeled data structures, please use xarray.

:::

::: danger Warning

read_pickle() and read_msgpack() are only guaranteed to be backwards compatible back to pandas version 0.20.3 (GH27082).

:::

These are the changes in pandas v0.25.0. See the release notes for a full changelog including other versions of pandas.

Enhancements

Groupby aggregation with relabeling

Pandas has added special groupby behavior, known as "named aggregation", for naming the output columns when applying multiple aggregation functions to specific columns (GH18366, GH26512).

```python
In [1]: animals = pd.DataFrame({'kind': ['cat', 'dog', 'cat', 'dog'],
   ...:                         'height': [9.1, 6.0, 9.5, 34.0],
   ...:                         'weight': [7.9, 7.5, 9.9, 198.0]})
   ...:

In [2]: animals
Out[2]:
  kind  height  weight
0  cat     9.1     7.9
1  dog     6.0     7.5
2  cat     9.5     9.9
3  dog    34.0   198.0

[4 rows x 3 columns]

In [3]: animals.groupby("kind").agg(
   ...:     min_height=pd.NamedAgg(column='height', aggfunc='min'),
   ...:     max_height=pd.NamedAgg(column='height', aggfunc='max'),
   ...:     average_weight=pd.NamedAgg(column='weight', aggfunc=np.mean),
   ...: )
   ...:
Out[3]:
      min_height  max_height  average_weight
kind
cat          9.1         9.5            8.90
dog          6.0        34.0          102.75

[2 rows x 3 columns]
```

Pass the desired column names as the **kwargs to .agg. The values of **kwargs should be tuples where the first element is the column selection and the second element is the aggregation function to apply. Pandas provides the pandas.NamedAgg namedtuple to make the arguments to the function clearer, but plain tuples are accepted as well.

```python
In [4]: animals.groupby("kind").agg(
   ...:     min_height=('height', 'min'),
   ...:     max_height=('height', 'max'),
   ...:     average_weight=('weight', np.mean),
   ...: )
   ...:
Out[4]:
      min_height  max_height  average_weight
kind
cat          9.1         9.5            8.90
dog          6.0        34.0          102.75

[2 rows x 3 columns]
```

Named aggregation is the recommended replacement for the deprecated "dict-of-dicts" approach to naming the output of column-specific aggregations (deprecation of groupby.agg() with a dict when renaming).

A similar approach is now available for Series groupby objects as well. Because there is no need to select a column, the values can just be the functions to apply.

```python
In [5]: animals.groupby("kind").height.agg(
   ...:     min_height="min",
   ...:     max_height="max",
   ...: )
   ...:
Out[5]:
      min_height  max_height
kind
cat          9.1         9.5
dog          6.0        34.0

[2 rows x 2 columns]
```

This type of aggregation is the recommended alternative to the deprecated behavior of passing a dict to a Series groupby aggregation (deprecation of groupby.agg() with a dict when renaming).

See Named aggregation for more.

Groupby aggregation with multiple lambdas

You can now provide multiple lambda functions to a list-like aggregation in pandas.core.groupby.GroupBy.agg (GH26430).

```python
In [6]: animals.groupby('kind').height.agg([
   ...:     lambda x: x.iloc[0], lambda x: x.iloc[-1]
   ...: ])
   ...:
Out[6]:
      <lambda_0>  <lambda_1>
kind
cat          9.1         9.5
dog          6.0        34.0

[2 rows x 2 columns]

In [7]: animals.groupby('kind').agg([
   ...:     lambda x: x.iloc[0] - x.iloc[1],
   ...:     lambda x: x.iloc[0] + x.iloc[1]
   ...: ])
   ...:
Out[7]:
          height                weight
      <lambda_0> <lambda_1> <lambda_0> <lambda_1>
kind
cat         -0.4       18.6       -2.0       17.8
dog        -28.0       40.0     -190.5      205.5

[2 rows x 4 columns]
```

Previously, these operations would raise a SpecificationError.

Better repr for MultiIndex

Printing of MultiIndex instances now shows the tuples of each row and ensures that the tuple items are vertically aligned, making it easier to understand the structure of the MultiIndex (GH13480):

The repr now looks like this:

```python
In [8]: pd.MultiIndex.from_product([['a', 'abc'], range(500)])
Out[8]:
MultiIndex([(  'a',   0),
            (  'a',   1),
            (  'a',   2),
            (  'a',   3),
            (  'a',   4),
            (  'a',   5),
            (  'a',   6),
            (  'a',   7),
            (  'a',   8),
            (  'a',   9),
            ...
            ('abc', 490),
            ('abc', 491),
            ('abc', 492),
            ('abc', 493),
            ('abc', 494),
            ('abc', 495),
            ('abc', 496),
            ('abc', 497),
            ('abc', 498),
            ('abc', 499)],
           length=1000)
```

In previous versions, outputting a MultiIndex printed all the levels and codes of the MultiIndex, which was visually unappealing and made the output harder to navigate. For example (limiting the range to 4):

```python
In [1]: pd.MultiIndex.from_product([['a', 'abc'], range(4)])
Out[1]: MultiIndex(levels=[['a', 'abc'], [0, 1, 2, 3]],
                   codes=[[0, 0, 0, 0, 1, 1, 1, 1], [0, 1, 2, 3, 0, 1, 2, 3]])
```

In the new repr, all values will be shown if the number of rows is smaller than options.display.max_seq_items (default: 100 items). Horizontally, the output will truncate if it is wider than options.display.width (default: 80 characters).
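
As a minimal sketch of tuning these thresholds (both option names are the standard display options mentioned above; the example index is illustrative):

```python
import pandas as pd

# Show all tuples of a MultiIndex repr as long as it has fewer than
# 200 items, instead of the default 100.
pd.set_option("display.max_seq_items", 200)

# Allow the repr to use up to 120 characters before truncating
# horizontally, instead of the default 80.
pd.set_option("display.width", 120)

mi = pd.MultiIndex.from_product([['a', 'abc'], range(80)])
print(mi)  # 160 items < 200, so every tuple is displayed
```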

Shorter truncated repr for Series and DataFrame

Currently, the default display options of pandas ensure that when a Series or DataFrame has more than 60 rows, its repr gets truncated to a maximum of 60 rows (the display.max_rows option). However, this still gives a repr that takes up a large part of the vertical screen. Therefore, a new option display.min_rows is introduced with a default of 10, which determines the number of rows shown in the truncated repr:

  • For small Series or DataFrames, up to max_rows rows are shown (default: 60).
  • For larger Series or DataFrames with a length above max_rows, only min_rows rows are shown (default: 10, i.e. the first and last 5 rows).

This dual option makes it possible to still see the full content of relatively small objects (e.g. df.head(20) shows all 20 rows), while giving a brief repr for large objects.

To restore the previous behavior of a single threshold, set pd.options.display.min_rows = None.
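
For illustration, a minimal sketch of the new default in action:

```python
import pandas as pd

long_series = pd.Series(range(1000))

# With the 0.25.0 defaults (display.max_rows=60, display.min_rows=10),
# this repr shows only the first and last 5 rows.
print(long_series)

# Restore the previous single-threshold behavior, truncating to
# display.max_rows (60) rows instead:
pd.options.display.min_rows = None
print(long_series)
```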

JSON normalize with max_level param support

json_normalize() normalizes the provided input dict to all nested levels. The new max_level parameter provides more control over which level to end normalization (GH23843):

```python
In [9]: from pandas.io.json import json_normalize

In [10]: data = [{
   ....:     'CreatedBy': {'Name': 'User001'},
   ....:     'Lookup': {'TextField': 'Some text',
   ....:                'UserField': {'Id': 'ID001', 'Name': 'Name001'}},
   ....:     'Image': {'a': 'b'}
   ....: }]
   ....:

In [11]: json_normalize(data, max_level=1)
Out[11]:
  CreatedBy.Name Lookup.TextField                    Lookup.UserField Image.a
0        User001        Some text  {'Id': 'ID001', 'Name': 'Name001'}       b

[1 rows x 4 columns]
```

Series.explode to split list-like values to rows

Series and DataFrame have gained the Series.explode() and DataFrame.explode() methods to transform list-likes into individual rows. See the section on exploding list-like columns in the docs for more information (GH16538, GH10511).

Here is a typical use case. You have a comma-separated string in a column.

```python
In [12]: df = pd.DataFrame([{'var1': 'a,b,c', 'var2': 1},
   ....:                    {'var1': 'd,e,f', 'var2': 2}])
   ....:

In [13]: df
Out[13]:
    var1  var2
0  a,b,c     1
1  d,e,f     2

[2 rows x 2 columns]
```

Creating a long-form DataFrame is now straightforward using chained operations:

```python
In [14]: df.assign(var1=df.var1.str.split(',')).explode('var1')
Out[14]:
  var1  var2
0    a     1
0    b     1
0    c     1
1    d     2
1    e     2
1    f     2

[6 rows x 2 columns]
```

Other enhancements

Backwards incompatible API changes

Indexing with date strings with UTC offsets

Indexing a DataFrame or Series with a DatetimeIndex with a date string with a UTC offset would previously ignore the UTC offset. Now, the UTC offset is respected in indexing. (GH24076, GH16785)

```python
In [15]: df = pd.DataFrame([0], index=pd.DatetimeIndex(['2019-01-01'], tz='US/Pacific'))

In [16]: df
Out[16]:
                           0
2019-01-01 00:00:00-08:00  0

[1 rows x 1 columns]
```

Previous behavior:

```python
In [3]: df['2019-01-01 00:00:00+04:00':'2019-01-01 01:00:00+04:00']
Out[3]:
                           0
2019-01-01 00:00:00-08:00  0
```

New behavior:

```python
In [17]: df['2019-01-01 12:00:00+04:00':'2019-01-01 13:00:00+04:00']
Out[17]:
                           0
2019-01-01 00:00:00-08:00  0

[1 rows x 1 columns]
```

MultiIndex constructed from levels and codes

Constructing a MultiIndex with NaN levels or codes value < -1 was allowed previously. Now, construction with codes value < -1 is not allowed and NaN levels’ corresponding codes would be reassigned as -1. (GH19387)

Previous behavior:

```python
In [1]: pd.MultiIndex(levels=[[np.nan, None, pd.NaT, 128, 2]],
   ...:               codes=[[0, -1, 1, 2, 3, 4]])
   ...:
Out[1]: MultiIndex(levels=[[nan, None, NaT, 128, 2]],
                   codes=[[0, -1, 1, 2, 3, 4]])

In [2]: pd.MultiIndex(levels=[[1, 2]], codes=[[0, -2]])
Out[2]: MultiIndex(levels=[[1, 2]],
                   codes=[[0, -2]])
```

New behavior:

```python
In [18]: pd.MultiIndex(levels=[[np.nan, None, pd.NaT, 128, 2]],
   ....:               codes=[[0, -1, 1, 2, 3, 4]])
   ....:
Out[18]:
MultiIndex([(nan,),
            (nan,),
            (nan,),
            (nan,),
            (128,),
            (  2,)],
           )

In [19]: pd.MultiIndex(levels=[[1, 2]], codes=[[0, -2]])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-19-225a01af3975> in <module>
----> 1 pd.MultiIndex(levels=[[1, 2]], codes=[[0, -2]])

/pandas/pandas/util/_decorators.py in wrapper(*args, **kwargs)
    206             else:
    207                 kwargs[new_arg_name] = new_arg_value
--> 208             return func(*args, **kwargs)
    209
    210         return wrapper

/pandas/pandas/core/indexes/multi.py in __new__(cls, levels, codes, sortorder, names, dtype, copy, name, verify_integrity, _set_identity)
    270
    271         if verify_integrity:
--> 272             new_codes = result._verify_integrity()
    273             result._codes = new_codes
    274

/pandas/pandas/core/indexes/multi.py in _verify_integrity(self, codes, levels)
    348                 raise ValueError(
    349                     "On level {level}, code value ({code})"
--> 350                     " < -1".format(level=i, code=level_codes.min())
    351                 )
    352             if not level.is_unique:

ValueError: On level 0, code value (-2) < -1
```

Groupby.apply on DataFrame evaluates first group only once

The implementation of DataFrameGroupBy.apply() previously evaluated the supplied function consistently twice on the first group to infer if it is safe to use a fast code path. Particularly for functions with side effects, this was an undesired behavior and may have led to surprises. (GH2936, GH2656, GH7739, GH10519, GH12155, GH20084, GH21417)

Now every group is evaluated only a single time.

```python
In [20]: df = pd.DataFrame({"a": ["x", "y"], "b": [1, 2]})

In [21]: df
Out[21]:
   a  b
0  x  1
1  y  2

[2 rows x 2 columns]

In [22]: def func(group):
   ....:     print(group.name)
   ....:     return group
   ....:
```

Previous behavior:

```python
In [3]: df.groupby('a').apply(func)
x
x
y
Out[3]:
   a  b
0  x  1
1  y  2
```

New behavior:

```python
In [23]: df.groupby("a").apply(func)
x
y
Out[23]:
   a  b
0  x  1
1  y  2

[2 rows x 2 columns]
```

Concatenating sparse values

When passed DataFrames whose values are sparse, concat() will now return a Series or DataFrame with sparse values, rather than a SparseDataFrame (GH25702).

```python
In [24]: df = pd.DataFrame({"A": pd.SparseArray([0, 1])})
```

Previous behavior:

```python
In [2]: type(pd.concat([df, df]))
pandas.core.sparse.frame.SparseDataFrame
```

New behavior:

```python
In [25]: type(pd.concat([df, df]))
Out[25]: pandas.core.frame.DataFrame
```

This now matches the existing behavior of concat on Series with sparse values. concat() will continue to return a SparseDataFrame when all the values are instances of SparseDataFrame.

This change also affects routines using concat() internally, like get_dummies(), which now returns a DataFrame in all cases (previously a SparseDataFrame was returned if all the columns were dummy encoded, and a DataFrame otherwise).

Providing any SparseSeries or SparseDataFrame to concat() will cause a SparseSeries or SparseDataFrame to be returned, as before.

The .str-accessor performs stricter type checks

Due to the lack of more fine-grained dtypes, Series.str so far only checked whether the data was of object dtype. Series.str will now infer the dtype data within the Series; in particular, 'bytes'-only data will raise an exception (except for Series.str.decode(), Series.str.get(), Series.str.len(), Series.str.slice()), see GH23163, GH23011, GH23551.

Previous behavior:

```python
In [1]: s = pd.Series(np.array(['a', 'ba', 'cba'], 'S'), dtype=object)

In [2]: s
Out[2]:
0      b'a'
1     b'ba'
2    b'cba'
dtype: object

In [3]: s.str.startswith(b'a')
Out[3]:
0     True
1    False
2    False
dtype: bool
```

New behavior:

```python
In [26]: s = pd.Series(np.array(['a', 'ba', 'cba'], 'S'), dtype=object)

In [27]: s
Out[27]:
0      b'a'
1     b'ba'
2    b'cba'
Length: 3, dtype: object

In [28]: s.str.startswith(b'a')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-28-ac784692b361> in <module>
----> 1 s.str.startswith(b'a')

/pandas/pandas/core/strings.py in wrapper(self, *args, **kwargs)
   1840                 )
   1841             )
-> 1842             raise TypeError(msg)
   1843         return func(self, *args, **kwargs)
   1844

TypeError: Cannot use .str.startswith with values of inferred dtype 'bytes'.
```
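
The exempted methods listed above still work on bytes data, so one way to keep using the string methods is to decode first. A minimal sketch continuing the example above (the expected output is shown in comments):

```python
# Series.str.decode() is one of the methods still allowed on bytes
# data; decoding yields str values, on which the usual string
# methods work again.
decoded = s.str.decode('ascii')
decoded.str.startswith('a')
# 0     True
# 1    False
# 2    False
# Length: 3, dtype: bool
```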

Categorical dtypes are preserved during groupby

Previously, columns that were categorical, but not the groupby key(s) would be converted to object dtype during groupby operations. Pandas now will preserve these dtypes. (GH18502)

```python
In [29]: cat = pd.Categorical(["foo", "bar", "bar", "qux"], ordered=True)

In [30]: df = pd.DataFrame({'payload': [-1, -2, -1, -2], 'col': cat})

In [31]: df
Out[31]:
   payload  col
0       -1  foo
1       -2  bar
2       -1  bar
3       -2  qux

[4 rows x 2 columns]

In [32]: df.dtypes
Out[32]:
payload       int64
col        category
Length: 2, dtype: object
```

Previous Behavior:

```python
In [5]: df.groupby('payload').first().col.dtype
Out[5]: dtype('O')
```

New Behavior:

```python
In [33]: df.groupby('payload').first().col.dtype
Out[33]: CategoricalDtype(categories=['bar', 'foo', 'qux'], ordered=True)
```

Incompatible Index type unions

When performing Index.union() operations between objects of incompatible dtypes, the result will be a base Index of dtype object. This behavior holds true for unions between Index objects that previously would have been prohibited. The dtype of empty Index objects will now be evaluated before performing union operations rather than simply returning the other Index object. Index.union() can now be considered commutative, such that A.union(B) == B.union(A) (GH23525).

Previous behavior:

```python
In [1]: pd.period_range('19910905', periods=2).union(pd.Int64Index([1, 2, 3]))
...
ValueError: can only call with other PeriodIndex-ed objects

In [2]: pd.Index([], dtype=object).union(pd.Index([1, 2, 3]))
Out[2]: Int64Index([1, 2, 3], dtype='int64')
```

New behavior:

```python
In [34]: pd.period_range('19910905', periods=2).union(pd.Int64Index([1, 2, 3]))
Out[34]: Index([1991-09-05, 1991-09-06, 1, 2, 3], dtype='object')

In [35]: pd.Index([], dtype=object).union(pd.Index([1, 2, 3]))
Out[35]: Index([1, 2, 3], dtype='object')
```

Note that integer- and floating-dtype indexes are considered “compatible”. The integer values are coerced to floating point, which may result in loss of precision. See Set operations on Index objects for more.
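
A minimal sketch of that coercion (example values chosen for illustration; the expected result is shown in a comment):

```python
import pandas as pd

# The integer index is coerced to float64 for the union, so the
# result is a floating-point Index.
pd.Index([1, 2, 3]).union(pd.Index([0.5, 1.5]))
# Float64Index([0.5, 1.0, 1.5, 2.0, 3.0], dtype='float64')
```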

DataFrame groupby ffill/bfill no longer return group labels

The methods ffill, bfill, pad and backfill of DataFrameGroupBy previously included the group labels in the return value, which was inconsistent with other groupby transforms. Now only the filled values are returned. (GH21521)

```python
In [36]: df = pd.DataFrame({"a": ["x", "y"], "b": [1, 2]})

In [37]: df
Out[37]:
   a  b
0  x  1
1  y  2

[2 rows x 2 columns]
```

Previous behavior:

```python
In [3]: df.groupby("a").ffill()
Out[3]:
   a  b
0  x  1
1  y  2
```

New behavior:

```python
In [38]: df.groupby("a").ffill()
Out[38]:
   b
0  1
1  2

[2 rows x 1 columns]
```

DataFrame describe on an empty categorical / object column will return top and freq

When calling DataFrame.describe() with an empty categorical / object column, the ‘top’ and ‘freq’ columns were previously omitted, which was inconsistent with the output for non-empty columns. Now the ‘top’ and ‘freq’ columns will always be included, with numpy.nan in the case of an empty DataFrame (GH26397)

```python
In [39]: df = pd.DataFrame({"empty_col": pd.Categorical([])})

In [40]: df
Out[40]:
Empty DataFrame
Columns: [empty_col]
Index: []

[0 rows x 1 columns]
```

Previous behavior:

```python
In [3]: df.describe()
Out[3]:
        empty_col
count           0
unique          0
```

New behavior:

```python
In [41]: df.describe()
Out[41]:
       empty_col
count          0
unique         0
top          NaN
freq         NaN

[4 rows x 1 columns]
```

__str__ methods now call __repr__ rather than vice versa

Pandas has until now mostly defined string representations in a Pandas objects’s __str__/__unicode__/__bytes__ methods, and called __str__ from the __repr__ method, if a specific __repr__ method is not found. This is not needed for Python3. In Pandas 0.25, the string representations of Pandas objects are now generally defined in __repr__, and calls to __str__ in general now pass the call on to the __repr__, if a specific __str__ method doesn’t exist, as is standard for Python. This change is backward compatible for direct usage of Pandas, but if you subclass Pandas objects and give your subclasses specific __str__/__repr__ methods, you may have to adjust your __str__/__repr__ methods (GH26495).
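
For subclasses, a minimal sketch of the pattern that matches the new arrangement (the subclass name is hypothetical):

```python
import pandas as pd

class MySeries(pd.Series):
    # Under pandas >= 0.25, define the representation in __repr__;
    # str() falls through to __repr__ unless a specific __str__ exists.
    def __repr__(self):
        return "MySeries:\n" + super().__repr__()

s = MySeries([1, 2, 3])
repr(s)  # uses MySeries.__repr__
str(s)   # also routed to MySeries.__repr__
```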

Indexing an IntervalIndex with Interval objects

Indexing methods for IntervalIndex have been modified to require exact matches only for Interval queries. IntervalIndex methods previously matched on any overlapping Interval. Behavior with scalar points, e.g. querying with an integer, is unchanged (GH16316).

```python
In [42]: ii = pd.IntervalIndex.from_tuples([(0, 4), (1, 5), (5, 8)])

In [43]: ii
Out[43]:
IntervalIndex([(0, 4], (1, 5], (5, 8]],
              closed='right',
              dtype='interval[int64]')
```

The in operator (__contains__) now only returns True for exact matches to Intervals in the IntervalIndex, whereas this would previously return True for any Interval overlapping an Interval in the IntervalIndex.

Previous behavior:

```python
In [4]: pd.Interval(1, 2, closed='neither') in ii
Out[4]: True

In [5]: pd.Interval(-10, 10, closed='both') in ii
Out[5]: True
```

New behavior:

```python
In [44]: pd.Interval(1, 2, closed='neither') in ii
Out[44]: False

In [45]: pd.Interval(-10, 10, closed='both') in ii
Out[45]: False
```

The get_loc() method now only returns locations for exact matches to Interval queries, as opposed to the previous behavior of returning locations for overlapping matches. A KeyError will be raised if an exact match is not found.

Previous behavior:

```python
In [6]: ii.get_loc(pd.Interval(1, 5))
Out[6]: array([0, 1])

In [7]: ii.get_loc(pd.Interval(2, 6))
Out[7]: array([0, 1, 2])
```

New behavior:

```python
In [6]: ii.get_loc(pd.Interval(1, 5))
Out[6]: 1

In [7]: ii.get_loc(pd.Interval(2, 6))
---------------------------------------------------------------------------
KeyError: Interval(2, 6, closed='right')
```

Likewise, get_indexer() and get_indexer_non_unique() will also only return locations for exact matches to Interval queries, with -1 denoting that an exact match was not found.

These indexing changes extend to querying a Series or DataFrame with an IntervalIndex index.

```python
In [46]: s = pd.Series(list('abc'), index=ii)

In [47]: s
Out[47]:
(0, 4]    a
(1, 5]    b
(5, 8]    c
Length: 3, dtype: object
```

Selecting from a Series or DataFrame using [] (__getitem__) or loc now only returns exact matches for Interval queries.

Previous behavior:

```python
In [8]: s[pd.Interval(1, 5)]
Out[8]:
(0, 4]    a
(1, 5]    b
dtype: object

In [9]: s.loc[pd.Interval(1, 5)]
Out[9]:
(0, 4]    a
(1, 5]    b
dtype: object
```

New behavior:

```python
In [48]: s[pd.Interval(1, 5)]
Out[48]: 'b'

In [49]: s.loc[pd.Interval(1, 5)]
Out[49]: 'b'
```

Similarly, a KeyError will be raised for non-exact matches instead of returning overlapping matches.

Previous behavior:

```python
In [9]: s[pd.Interval(2, 3)]
Out[9]:
(0, 4]    a
(1, 5]    b
dtype: object

In [10]: s.loc[pd.Interval(2, 3)]
Out[10]:
(0, 4]    a
(1, 5]    b
dtype: object
```

New behavior:

```python
In [6]: s[pd.Interval(2, 3)]
---------------------------------------------------------------------------
KeyError: Interval(2, 3, closed='right')

In [7]: s.loc[pd.Interval(2, 3)]
---------------------------------------------------------------------------
KeyError: Interval(2, 3, closed='right')
```

The overlaps() method can be used to create a boolean indexer that replicates the previous behavior of returning overlapping matches.

New behavior:

```python
In [50]: idxr = s.index.overlaps(pd.Interval(2, 3))

In [51]: idxr
Out[51]: array([ True,  True, False])

In [52]: s[idxr]
Out[52]:
(0, 4]    a
(1, 5]    b
Length: 2, dtype: object

In [53]: s.loc[idxr]
Out[53]:
(0, 4]    a
(1, 5]    b
Length: 2, dtype: object
```

Binary ufuncs on Series now align

Applying a binary ufunc like numpy.power() now aligns the inputs when both are Series (GH23293).

```python
In [54]: s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])

In [55]: s2 = pd.Series([3, 4, 5], index=['d', 'c', 'b'])

In [56]: s1
Out[56]:
a    1
b    2
c    3
Length: 3, dtype: int64

In [57]: s2
Out[57]:
d    3
c    4
b    5
Length: 3, dtype: int64
```

Previous behavior:

```python
In [5]: np.power(s1, s2)
Out[5]:
a      1
b     16
c    243
dtype: int64
```

New behavior:

```python
In [58]: np.power(s1, s2)
Out[58]:
a     1.0
b    32.0
c    81.0
d     NaN
Length: 4, dtype: float64
```

This matches the behavior of other binary operations in pandas, like Series.add(). To retain the previous behavior, convert the other Series to an array before applying the ufunc.

```python
In [59]: np.power(s1, s2.array)
Out[59]:
a      1
b     16
c    243
Length: 3, dtype: int64
```

Categorical.argsort now places missing values at the end

Categorical.argsort() now places missing values at the end of the array, making it consistent with NumPy and the rest of pandas (GH21801).

```python
In [60]: cat = pd.Categorical(['b', None, 'a'], categories=['a', 'b'], ordered=True)
```

Previous behavior:

```python
In [2]: cat = pd.Categorical(['b', None, 'a'], categories=['a', 'b'], ordered=True)

In [3]: cat.argsort()
Out[3]: array([1, 2, 0])

In [4]: cat[cat.argsort()]
Out[4]:
[NaN, a, b]
Categories (2, object): [a < b]
```

New behavior:

```python
In [61]: cat.argsort()
Out[61]: array([2, 0, 1])

In [62]: cat[cat.argsort()]
Out[62]:
[a, b, NaN]
Categories (2, object): [a < b]
```

Column order is preserved when passing a list of dicts to DataFrame

Starting with Python 3.7 the key-order of dict is guaranteed. In practice, this has been true since Python 3.6. The DataFrame constructor now treats a list of dicts in the same way as it does a list of OrderedDict, i.e. preserving the order of the dicts. This change applies only when pandas is running on Python>=3.6 (GH27309).

```python
In [63]: data = [
   ....:     {'name': 'Joe', 'state': 'NY', 'age': 18},
   ....:     {'name': 'Jane', 'state': 'KY', 'age': 19, 'hobby': 'Minecraft'},
   ....:     {'name': 'Jean', 'state': 'OK', 'age': 20, 'finances': 'good'}
   ....: ]
   ....:
```

Previous Behavior:

The columns were lexicographically sorted previously,

```python
In [1]: pd.DataFrame(data)
Out[1]:
   age finances      hobby  name state
0   18      NaN        NaN   Joe    NY
1   19      NaN  Minecraft  Jane    KY
2   20     good        NaN  Jean    OK
```

New Behavior:

The column order now matches the insertion-order of the keys in the dict, considering all the records from top to bottom. As a consequence, the column order of the resulting DataFrame has changed compared to previous pandas versions.

```python
In [64]: pd.DataFrame(data)
Out[64]:
   name state  age      hobby finances
0   Joe    NY   18        NaN      NaN
1  Jane    KY   19  Minecraft      NaN
2  Jean    OK   20        NaN     good

[3 rows x 5 columns]
```

Increased minimum versions for dependencies

Due to dropping support for Python 2.7, a number of optional dependencies have updated minimum versions (GH25725, GH24942, GH25752). Independently, some minimum supported versions of dependencies were updated (GH23519, GH25554). If installed, we now require:

| Package         | Minimum Version | Required |
| --------------- | --------------- | -------- |
| numpy           | 1.13.3          | X        |
| pytz            | 2015.4          | X        |
| python-dateutil | 2.6.1           | X        |
| bottleneck      | 1.2.1           |          |
| numexpr         | 2.6.2           |          |
| pytest (dev)    | 4.0.2           |          |

For optional libraries the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. Optional libraries below the lowest tested version may still work, but are not considered supported.

| Package        | Minimum Version |
| -------------- | --------------- |
| beautifulsoup4 | 4.6.0           |
| fastparquet    | 0.2.1           |
| gcsfs          | 0.2.2           |
| lxml           | 3.8.0           |
| matplotlib     | 2.2.2           |
| openpyxl       | 2.4.8           |
| pyarrow        | 0.9.0           |
| pymysql        | 0.7.1           |
| pytables       | 3.4.2           |
| scipy          | 0.19.0          |
| sqlalchemy     | 1.1.4           |
| xarray         | 0.8.2           |
| xlrd           | 1.1.0           |
| xlsxwriter     | 0.9.8           |
| xlwt           | 1.2.0           |

See Dependencies and Optional dependencies for more.
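
To check which versions are installed in a given environment, pandas provides a built-in helper:

```python
import pandas as pd

# Prints the versions of pandas, Python, and the required and optional
# dependencies, which is handy for verifying the minimums listed above.
pd.show_versions()
```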

Other API changes

Deprecations

Sparse subclasses

The SparseSeries and SparseDataFrame subclasses are deprecated. Their functionality is better-provided by a Series or DataFrame with sparse values.

Previous way:

```python
In [65]: df = pd.SparseDataFrame({"A": [0, 0, 1, 2]})

In [66]: df.dtypes
Out[66]:
A    Sparse[int64, nan]
Length: 1, dtype: object
```

New way:

```python
In [67]: df = pd.DataFrame({"A": pd.SparseArray([0, 0, 1, 2])})

In [68]: df.dtypes
Out[68]:
A    Sparse[int64, 0]
Length: 1, dtype: object
```

The memory usage of the two approaches is identical. See Migrating for more (GH19239).

The msgpack format

The msgpack format is deprecated as of 0.25 and will be removed in a future version. It is recommended to use pyarrow for on-the-wire transmission of pandas objects. (GH27084)
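
As a hedged sketch of one pyarrow-based alternative (the Feather format via pyarrow.feather here; the file name is illustrative and the exact API may vary across pyarrow versions):

```python
import pandas as pd
import pyarrow.feather as feather

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# Serialize with pyarrow instead of the deprecated to_msgpack()
feather.write_feather(df, "frame.feather")

# ... and read it back
roundtripped = feather.read_feather("frame.feather")
```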

Other deprecations

Removal of prior version deprecations/changes

Performance improvements

  • Significant speedup in SparseArray initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (GH24985)
  • DataFrame.to_stata() is now faster when outputting data with any string or non-native endian columns (GH25045)
  • Improved performance of Series.searchsorted(). The speedup is especially large when the dtype is int8/int16/int32 and the searched key is within the integer bounds for the dtype (GH22034)
  • Improved performance of pandas.core.groupby.GroupBy.quantile() (GH20405)
  • Improved performance of slicing and other selected operations on a RangeIndex (GH26565, GH26617, GH26722)
  • RangeIndex now performs standard lookup without instantiating an actual hashtable, hence saving memory (GH16685)
  • Improved performance of read_csv() by faster tokenizing and faster parsing of small float numbers (GH25784)
  • Improved performance of read_csv() by faster parsing of N/A and boolean values (GH25804)
  • Improved performance of IntervalIndex.is_monotonic, IntervalIndex.is_monotonic_increasing and IntervalIndex.is_monotonic_decreasing by removing conversion to MultiIndex (GH24813)
  • Improved performance of DataFrame.to_csv() when writing datetime dtypes (GH25708)
  • Improved performance of read_csv() by much faster parsing of MM/YYYY and DD/MM/YYYY datetime formats (GH25922)
  • Improved performance of nanops for dtypes that cannot store NaNs. Speedup is particularly prominent for Series.all() and Series.any() (GH25070)
  • Improved performance of Series.map() for dictionary mappers on categorical series by mapping the categories instead of mapping all values (GH23785)
  • Improved performance of IntervalIndex.intersection() (GH24813)
  • Improved performance of read_csv() by faster concatenating date columns without extra conversion to string for integer/float zero and float NaN; by faster checking the string for the possibility of being a date (GH25754)
  • Improved performance of IntervalIndex.is_unique by removing conversion to MultiIndex (GH24813)
  • Restored performance of DatetimeIndex.__iter__() by re-enabling specialized code path (GH26702)
  • Improved performance when building MultiIndex with at least one CategoricalIndex level (GH22044)
  • Improved performance by removing the need for a garbage collect when checking for SettingWithCopyWarning (GH27031)
  • For to_datetime() changed default value of cache parameter to True (GH26043)
  • Improved performance of DatetimeIndex and PeriodIndex slicing given non-unique, monotonic data (GH27136).
  • Improved performance of pd.read_json() for index-oriented data. (GH26773)
  • Improved performance of MultiIndex.shape() (GH27384).

Bug fixes

Categorical

Datetimelike

  • Bug in to_datetime() which would raise an (incorrect) ValueError when called with a date far into the future and the format argument specified instead of raising OutOfBoundsDatetime (GH23830)
  • Bug in to_datetime() which would raise InvalidIndexError: Reindexing only valid with uniquely valued Index objects when called with cache=True, with arg including at least two different elements from the set {None, numpy.nan, pandas.NaT} (GH22305)
  • Bug in DataFrame and Series where timezone aware data with dtype='datetime64[ns]' was not cast to naive (GH25843)
  • Improved Timestamp type checking in various datetime functions to prevent exceptions when using a subclassed datetime (GH25851)
  • Bug in Series and DataFrame repr where np.datetime64('NaT') and np.timedelta64('NaT') with dtype=object would be represented as NaN (GH25445)
  • Bug in to_datetime() which does not replace the invalid argument with NaT when errors is set to 'coerce' (GH26122)
  • Bug in adding DateOffset with nonzero month to DatetimeIndex would raise ValueError (GH26258)
  • Bug in to_datetime() which raises unhandled OverflowError when called with mix of invalid dates and NaN values with format='%Y%m%d' and error='coerce' (GH25512)
  • Bug in isin() for datetimelike indexes; DatetimeIndex, TimedeltaIndex and PeriodIndex where the levels parameter was ignored. (GH26675)
  • Bug in to_datetime() which raises TypeError for format='%Y%m%d' when called for invalid integer dates with length >= 6 digits with errors='ignore'
  • Bug when comparing a PeriodIndex against a zero-dimensional numpy array (GH26689)
  • Bug in constructing a Series or DataFrame from a numpy datetime64 array with a non-ns unit and out-of-bound timestamps generating rubbish data, which will now correctly raise an OutOfBoundsDatetime error (GH26206).
  • Bug in date_range() with unnecessary OverflowError being raised for very large or very small dates (GH26651)
  • Bug where adding Timestamp to a np.timedelta64 object would raise instead of returning a Timestamp (GH24775)
  • Bug where comparing a zero-dimensional numpy array containing a np.datetime64 object to a Timestamp would incorrectly raise TypeError (GH26916)
  • Bug in to_datetime() which would raise ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True when called with cache=True, with arg including datetime strings with different offset (GH26097)

Timedelta

  • Bug in TimedeltaIndex.intersection() where for non-monotonic indices in some cases an empty Index was returned when in fact an intersection existed (GH25913)
  • Bug with comparisons between Timedelta and NaT raising TypeError (GH26039)
  • Bug when adding or subtracting a BusinessHour to a Timestamp with the resulting time landing in a following or prior day respectively (GH26381)
  • Bug when comparing a TimedeltaIndex against a zero-dimensional numpy array (GH26689)

Timezones

Numeric

  • Bug in to_numeric() in which large negative numbers were being improperly handled (GH24910)
  • Bug in to_numeric() in which numbers were being coerced to float, even though errors was not coerce (GH24910)
  • Bug in to_numeric() in which invalid values for errors were being allowed (GH26466)
  • Bug in format in which floating point complex numbers were not being formatted to proper display precision and trimming (GH25514)
  • Bug in error messages in DataFrame.corr() and Series.corr(). Added the possibility of using a callable. (GH25729)
  • Bug in Series.divmod() and Series.rdivmod() which would raise an (incorrect) ValueError rather than return a pair of Series objects as result (GH25557)
  • Raises a helpful exception when a non-numeric index is sent to interpolate() with methods which require numeric index. (GH21662)
  • Bug in eval() when comparing floats with scalar operators, for example: x < -0.1 (GH25928)
  • Fixed bug where casting all-boolean array to integer extension array failed (GH25211)
  • Bug in divmod with a Series object containing zeros incorrectly raising AttributeError (GH26987)
  • Inconsistency in Series floor-division (//) and divmod filling positive//zero with NaN instead of Inf (GH27321)

Conversion

Strings

Interval

Indexing

  • Improved exception message when calling DataFrame.iloc() with a list of non-numeric objects (GH25753).
  • Improved exception message when calling .iloc or .loc with a boolean indexer with different length (GH26658).
  • Bug in KeyError exception message when indexing a MultiIndex with a non-existent key not displaying the original key (GH27250).
  • Bug in .iloc and .loc with a boolean indexer not raising an IndexError when too few items are passed (GH26658).
  • Bug in DataFrame.loc() and Series.loc() where KeyError was not raised for a MultiIndex when the key was less than or equal to the number of levels in the MultiIndex (GH14885).
  • Bug in which DataFrame.append() produced an erroneous warning indicating that a KeyError will be thrown in the future when the data to be appended contains new columns (GH22252).
  • Bug in which DataFrame.to_csv() caused a segfault for a reindexed data frame, when the indices were single-level MultiIndex (GH26303).
  • Fixed bug where assigning an arrays.PandasArray to a pandas.core.frame.DataFrame would raise an error (GH26390)
  • Allow keyword arguments for callable local reference used in the DataFrame.query() string (GH26426)
  • Fixed a KeyError when indexing a MultiIndex level with a list containing exactly one label, which is missing (GH27148)
  • Bug which produced AttributeError on partial matching Timestamp in a MultiIndex (GH26944)
  • Bug in Categorical and CategoricalIndex with Interval values when using the in operator (__contains__) with objects that are not comparable to the values in the Interval (GH23705)
  • Bug in DataFrame.loc() and DataFrame.iloc() on a DataFrame with a single timezone-aware datetime64[ns] column incorrectly returning a scalar instead of a Series (GH27110)
  • Bug in CategoricalIndex and Categorical incorrectly raising ValueError instead of TypeError when a list is passed using the in operator (__contains__) (GH21729)
  • Bug in setting a new value in a Series with a Timedelta object incorrectly casting the value to an integer (GH22717)
  • Bug in Series setting a new key (__setitem__) with a timezone-aware datetime incorrectly raising ValueError (GH12862)
  • Bug in DataFrame.iloc() when indexing with a read-only indexer (GH17192)
  • Bug in Series setting an existing tuple key (__setitem__) with timezone-aware datetime values incorrectly raising TypeError (GH20441)

Missing

MultiIndex

I/O

  • Bug in DataFrame.to_html() where values were truncated using display options instead of outputting the full content (GH17004)
  • Fixed bug in missing text when using to_clipboard() if copying utf-16 characters in Python 3 on Windows (GH25040)
  • Bug in read_json() for orient='table' when it tries to infer dtypes by default, which is not applicable as dtypes are already defined in the JSON schema (GH21345)
  • Bug in read_json() for orient='table' and float index, as it infers index dtype by default, which is not applicable because index dtype is already defined in the JSON schema (GH25433)
  • Bug in read_json() for orient='table' and string of float column names, as it makes a column name type conversion to Timestamp, which is not applicable because column names are already defined in the JSON schema (GH25435)
  • Bug in json_normalize() for errors='ignore' where missing values in the input data, were filled in resulting DataFrame with the string "nan" instead of numpy.nan (GH25468)
  • DataFrame.to_html() now raises TypeError when using an invalid type for the classes parameter instead of AssertionError (GH25608)
  • Bug in DataFrame.to_string() and DataFrame.to_latex() that would lead to incorrect output when the header keyword is used (GH16718)
  • Bug in read_csv() not properly interpreting the UTF8 encoded filenames on Windows on Python 3.6+ (GH15086)
  • Improved performance in pandas.read_stata() and pandas.io.stata.StataReader when converting columns that have missing values (GH25772)
  • Bug in DataFrame.to_html() where header numbers would ignore display options when rounding (GH17280)
  • Bug in read_hdf() where reading a table from an HDF5 file written directly with PyTables fails with a ValueError when using a sub-selection via the start or stop arguments (GH11188)
  • Bug in read_hdf() not properly closing store after a KeyError is raised (GH25766)
  • Improved the explanation for the failure when value labels are repeated in Stata dta files and suggested work-arounds (GH25772)
  • Improved pandas.read_stata() and pandas.io.stata.StataReader to read incorrectly formatted 118 format files saved by Stata (GH25960)
  • Improved the col_space parameter in DataFrame.to_html() to accept a string so CSS length values can be set correctly (GH25941)
  • Fixed bug in loading objects from S3 that contain # characters in the URL (GH25945)
  • Adds use_bqstorage_api parameter to read_gbq() to speed up downloads of large data frames. This feature requires version 0.10.0 of the pandas-gbq library as well as the google-cloud-bigquery-storage and fastavro libraries. (GH26104)
  • Fixed memory leak in DataFrame.to_json() when dealing with numeric data (GH24889)
  • Bug in read_json() where date strings with Z were not converted to a UTC timezone (GH26168)
  • Added cache_dates=True parameter to read_csv(), which allows to cache unique dates when they are parsed (GH25990)
  • DataFrame.to_excel() now raises a ValueError when the caller’s dimensions exceed the limitations of Excel (GH26051)
  • Fixed bug in pandas.read_csv() where a BOM would result in incorrect parsing using engine='python' (GH26545)
  • read_excel() now raises a ValueError when input is of type pandas.io.excel.ExcelFile and engine param is passed since pandas.io.excel.ExcelFile has an engine defined (GH26566)
  • Bug while selecting from HDFStore with where='' specified (GH26610).
  • Fixed bug in DataFrame.to_excel() where custom objects (i.e. PeriodIndex) inside merged cells were not being converted into types safe for the Excel writer (GH27006)
  • Bug in read_hdf() where reading a timezone aware DatetimeIndex would raise a TypeError (GH11926)
  • Bug in to_msgpack() and read_msgpack() which would raise a ValueError rather than a FileNotFoundError for an invalid path (GH27160)
  • Fixed bug in DataFrame.to_parquet() which would raise a ValueError when the dataframe had no columns (GH27339)
  • Allow parsing of PeriodDtype columns when using read_csv() (GH26934)

Plotting

Groupby/resample/rolling

Reshaping

  • Bug in pandas.merge() which added a string of None when None was assigned in suffixes, instead of keeping the column name as-is (GH24782).
  • Bug in merge() when merging by index name would sometimes result in an incorrectly numbered index (missing index values are now assigned NA) (GH24212, GH25009)
  • to_records() now accepts dtypes to its column_dtypes parameter (GH24895)
  • Bug in concat() where order of OrderedDict (and dict in Python 3.6+) is not respected, when passed in as objs argument (GH21510)
  • Bug in pivot_table() where columns with NaN values are dropped even if dropna argument is False, when the aggfunc argument contains a list (GH22159)
  • Bug in concat() where the resulting freq of two DatetimeIndex with the same freq would be dropped (GH3232).
  • Bug in merge() where merging with equivalent Categorical dtypes was raising an error (GH22501)
  • Bug in DataFrame instantiating with a dict of iterators or generators (e.g. pd.DataFrame({'A': reversed(range(3))})) raised an error (GH26349).
  • Bug in DataFrame instantiating with a range (e.g. pd.DataFrame(range(3))) raised an error (GH26342).
  • Bug in DataFrame constructor when passing non-empty tuples would cause a segmentation fault (GH25691)
  • Bug in Series.apply() failed when the series is a timezone aware DatetimeIndex (GH25959)
  • Bug in pandas.cut() where large bins could incorrectly raise an error due to an integer overflow (GH26045)
  • Bug in DataFrame.sort_index() where an error is thrown when a multi-indexed DataFrame is sorted on all levels with the initial level sorted last (GH26053)
  • Bug in Series.nlargest() treats True as smaller than False (GH26154)
  • Bug in DataFrame.pivot_table() with a IntervalIndex as pivot index would raise TypeError (GH25814)
  • Bug in which DataFrame.from_dict() ignored order of OrderedDict when orient='index' (GH8425).
  • Bug in DataFrame.transpose() where transposing a DataFrame with a timezone-aware datetime column would incorrectly raise ValueError (GH26825)
  • Bug in pivot_table() when pivoting a timezone aware column as the values would remove timezone information (GH14948)
  • Bug in merge_asof() when specifying multiple by columns where one is datetime64[ns, tz] dtype (GH26649)

Sparse

  • Significant speedup in SparseArray initialization that benefits most operations, fixing performance regression introduced in v0.20.0 (GH24985)
  • Bug in SparseFrame constructor where passing None as the data would cause default_fill_value to be ignored (GH16807)
  • Bug in SparseDataFrame when adding a column in which the length of values does not match length of index, AssertionError is raised instead of raising ValueError (GH25484)
  • Introduce a better error message in Series.sparse.from_coo() so it returns a TypeError for inputs that are not coo matrices (GH26554)
  • Bug in numpy.modf() on a SparseArray. Now a tuple of SparseArray is returned (GH26946).

Build changes

  • Fix install error with PyPy on macOS (GH26536)

ExtensionArray

  • Bug in factorize() when passing an ExtensionArray with a custom na_sentinel (GH25696).
  • Series.count() miscounts NA values in ExtensionArrays (GH26835)
  • Added Series.__array_ufunc__ to better handle NumPy ufuncs applied to Series backed by extension arrays (GH23293).
  • Keyword argument deep has been removed from ExtensionArray.copy() (GH27083)

Other

  • Removed unused C functions from vendored UltraJSON implementation (GH26198)
  • Allow Index and RangeIndex to be passed to numpy min and max functions (GH26125)
  • Use actual class name in repr of empty objects of a Series subclass (GH27001).
  • Bug in DataFrame where passing an object array of timezone-aware datetime objects would incorrectly raise ValueError (GH13287)

Contributors

(Translator's note: the contributor list was not published officially.)