Commit f3c9b37

DOC: Use more executed instead of static code blocks (#54282)
* DOC: Use more executed instead of static code blocks
* change to except
* Convert more code blocks
* convert even more
* okexcept
* More fixes
* Address more
* another okexcept
* Fix okexcept
* address again
* more fixes
* fix merging
1 parent 309f9ef commit f3c9b37

10 files changed: +169, -282 lines changed

doc/source/user_guide/advanced.rst

Lines changed: 17 additions & 26 deletions
@@ -620,31 +620,23 @@ inefficient (and show a ``PerformanceWarning``). It will also
 return a copy of the data rather than a view:

 .. ipython:: python
+    :okwarning:

     dfm = pd.DataFrame(
         {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
     )
     dfm = dfm.set_index(["jim", "joe"])
     dfm
-
-.. code-block:: ipython
-
-    In [4]: dfm.loc[(1, 'z')]
-    PerformanceWarning: indexing past lexsort depth may impact performance.
-
-    Out[4]:
-           jolie
-    jim joe
-    1   z  0.64094
+    dfm.loc[(1, 'z')]

 .. _advanced.unsorted:

 Furthermore, if you try to index something that is not fully lexsorted, this can raise:

-.. code-block:: ipython
+.. ipython:: python
+    :okexcept:

-    In [5]: dfm.loc[(0, 'y'):(1, 'z')]
-    UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
+    dfm.loc[(0, 'y'):(1, 'z')]

 The :meth:`~MultiIndex.is_monotonic_increasing` method on a ``MultiIndex`` shows if the
 index is sorted:
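
For context beyond the diff (not part of this commit): a minimal sketch of why the converted example needs ``:okwarning:``, and how sorting the ``MultiIndex`` makes the warning go away. The frame mirrors the ``dfm`` built above.

.. code-block:: python

    import numpy as np
    import pandas as pd

    dfm = pd.DataFrame(
        {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
    ).set_index(["jim", "joe"])

    # The "joe" level is not lexsorted ("z" before "y"), so label-based
    # indexing emits a PerformanceWarning; sorting the index first avoids it.
    dfm_sorted = dfm.sort_index()
    print(dfm_sorted.loc[(1, "z")])
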
@@ -836,10 +828,10 @@ values **not** in the categories, similarly to how you can reindex **any** panda
     df5 = df5.set_index("B")
     df5.index

-.. code-block:: ipython
+.. ipython:: python
+    :okexcept:

-    In [1]: pd.concat([df4, df5])
-    TypeError: categories must match existing categories when appending
+    pd.concat([df4, df5])

 .. _advanced.rangeindex:
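
Not part of the commit, but a hedged sketch of how the failing ``pd.concat([df4, df5])`` can be worked around; ``df4``/``df5`` here are hypothetical stand-ins for the guide's frames (two DataFrames whose ``CategoricalIndex`` objects carry different categories).

.. code-block:: python

    import pandas as pd

    df4 = pd.DataFrame({"A": [1, 2]}, index=pd.CategoricalIndex(list("ab")))
    df5 = pd.DataFrame({"A": [3, 4]}, index=pd.CategoricalIndex(list("bc")))

    # Casting both indexes to one shared CategoricalDtype before concatenating
    # keeps the result categorical and sidesteps the TypeError.
    shared = pd.CategoricalDtype(categories=list("abc"))
    df4.index = df4.index.astype(shared)
    df5.index = df5.index.astype(shared)
    print(pd.concat([df4, df5]).index)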

@@ -921,11 +913,10 @@ Selecting using an ``Interval`` will only return exact matches.

 Trying to select an ``Interval`` that is not exactly contained in the ``IntervalIndex`` will raise a ``KeyError``.

-.. code-block:: python
+.. ipython:: python
+    :okexcept:

-    In [7]: df.loc[pd.Interval(0.5, 2.5)]
-    ---------------------------------------------------------------------------
-    KeyError: Interval(0.5, 2.5, closed='right')
+    df.loc[pd.Interval(0.5, 2.5)]

 Selecting all ``Intervals`` that overlap a given ``Interval`` can be performed using the
 :meth:`~IntervalIndex.overlaps` method to create a boolean indexer.
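
As an aside (assumed data, not the guide's actual ``df``): exact ``Interval`` lookups either match or raise, so a containment check or an ``overlaps`` mask is the usual guard.

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame(
        {"A": [1, 2, 3, 4]},
        index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4]),
    )

    query = pd.Interval(0.5, 2.5)
    # `query` is not an exact member of the index, so df.loc[query] would raise
    # KeyError; select every overlapping interval instead.
    if query in df.index:
        print(df.loc[query])
    else:
        print(df[df.index.overlaps(query)])
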
@@ -1062,15 +1053,14 @@ On the other hand, if the index is not monotonic, then both slice bounds must be
     # OK because 2 and 4 are in the index
     df.loc[2:4, :]

-.. code-block:: ipython
+.. ipython:: python
+    :okexcept:

     # 0 is not in the index
-    In [9]: df.loc[0:4, :]
-    KeyError: 0
+    df.loc[0:4, :]

     # 3 is not a unique label
-    In [11]: df.loc[2:3, :]
-    KeyError: 'Cannot get right slice bound for non-unique label: 3'
+    df.loc[2:3, :]

 ``Index.is_monotonic_increasing`` and ``Index.is_monotonic_decreasing`` only check that
 an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
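
For reference (not part of the commit), a small sketch on an assumed frame of the usual remedy: once the index is sorted, and hence monotonic, both slices that fail above are accepted.

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"A": range(6)}, index=[5, 4, 3, 2, 3, 1])

    # sort_index() makes the index monotonic, so slice bounds no longer have
    # to be present in the index (0) or unique (3).
    df_sorted = df.sort_index()
    print(df_sorted.loc[0:4])
    print(df_sorted.loc[2:3])
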
@@ -1109,7 +1099,8 @@ accomplished as such:
 However, if you only had ``c`` and ``e``, determining the next element in the
 index can be somewhat complicated. For example, the following does not work:

-::
+.. ipython:: python
+    :okexcept:

     s.loc['c':'e' + 1]
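
For reference (not part of the commit): since ``'e' + 1`` is not a valid label, a positional bound from ``Index.searchsorted`` is one way to express an endpoint-exclusive slice on an assumed ``s``.

.. code-block:: python

    import pandas as pd

    s = pd.Series(range(6), index=list("abcdef"))

    # Label slicing is inclusive, and "the label after 'e'" cannot be spelled
    # as 'e' + 1.  searchsorted gives the position of 'e' itself, which works
    # as an exclusive stop with iloc.
    start = s.index.searchsorted("c")
    stop = s.index.searchsorted("e")
    print(s.iloc[start:stop])  # 'c' and 'd', excluding 'e'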

doc/source/user_guide/basics.rst

Lines changed: 18 additions & 25 deletions
Original file line numberDiff line numberDiff line change
@@ -322,24 +322,21 @@ You can test if a pandas object is empty, via the :attr:`~DataFrame.empty` prope
322322
323323
.. warning::
324324

325-
You might be tempted to do the following:
325+
Asserting the truthiness of a pandas object will raise an error, as the testing of the emptiness
326+
or values is ambiguous.
326327

327-
.. code-block:: python
328-
329-
>>> if df:
330-
... pass
331-
332-
Or
333-
334-
.. code-block:: python
328+
.. ipython:: python
329+
:okexcept:
335330
336-
>>> df and df2
331+
if df:
332+
print(True)
337333
338-
These will both raise errors, as you are trying to compare multiple values.::
334+
.. ipython:: python
335+
:okexcept:
339336
340-
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
337+
df and df2
341338
342-
See :ref:`gotchas<gotchas.truth>` for a more detailed discussion.
339+
See :ref:`gotchas<gotchas.truth>` for a more detailed discussion.
343340

344341
.. _basics.equals:
345342

@@ -404,13 +401,12 @@ objects of the same length:
 Trying to compare ``Index`` or ``Series`` objects of different lengths will
 raise a ValueError:

-.. code-block:: ipython
+.. ipython:: python
+    :okexcept:

-    In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
-    ValueError: Series lengths must match to compare
+    pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])

-    In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
-    ValueError: Series lengths must match to compare
+    pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])

 Note that this is different from the NumPy behavior where a comparison can
 be broadcast:
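
A sketch of one workaround (not in the commit): align the two Series explicitly, after which ``==`` compares elementwise and padded positions simply compare unequal.

.. code-block:: python

    import pandas as pd

    s1 = pd.Series(["foo", "bar", "baz"])
    s2 = pd.Series(["foo", "bar"])

    # align() reindexes both Series to the union of their indexes, padding
    # the shorter one with NaN, so the lengths now match.
    a, b = s1.align(s2)
    print(a == b)
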
@@ -910,18 +906,15 @@ maximum value for each column occurred:
     tsdf.apply(lambda x: x.idxmax())

 You may also pass additional arguments and keyword arguments to the :meth:`~DataFrame.apply`
-method. For instance, consider the following function you would like to apply:
+method.

-.. code-block:: python
+.. ipython:: python

     def subtract_and_divide(x, sub, divide=1):
         return (x - sub) / divide

-You may then apply this function as follows:
-
-.. code-block:: python
-
-    df.apply(subtract_and_divide, args=(5,), divide=3)
+    df_udf = pd.DataFrame(np.ones((2, 2)))
+    df_udf.apply(subtract_and_divide, args=(5,), divide=3)

 Another useful feature is the ability to pass Series methods to carry out some
 Series operation on each column or row:
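
For completeness (not part of the commit), the same call also works with ``functools.partial`` binding the extra arguments, mirroring the ``df_udf`` example added above:

.. code-block:: python

    from functools import partial

    import numpy as np
    import pandas as pd

    def subtract_and_divide(x, sub, divide=1):
        return (x - sub) / divide

    df_udf = pd.DataFrame(np.ones((2, 2)))

    # Equivalent ways of forwarding the extra arguments through apply():
    print(df_udf.apply(subtract_and_divide, args=(5,), divide=3))
    print(df_udf.apply(partial(subtract_and_divide, sub=5, divide=3)))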

doc/source/user_guide/categorical.rst

Lines changed: 5 additions & 6 deletions
@@ -873,13 +873,12 @@ categoricals of the same categories and order information

 The below raises ``TypeError`` because the categories are ordered and not identical.

-.. code-block:: ipython
+.. ipython:: python
+    :okexcept:

-    In [1]: a = pd.Categorical(["a", "b"], ordered=True)
-    In [2]: b = pd.Categorical(["a", "b", "c"], ordered=True)
-    In [3]: union_categoricals([a, b])
-    Out[3]:
-    TypeError: to union ordered Categoricals, all categories must be the same
+    a = pd.Categorical(["a", "b"], ordered=True)
+    b = pd.Categorical(["a", "b", "c"], ordered=True)
+    union_categoricals([a, b])

 Ordered categoricals with different categories or orderings can be combined by
 using the ``ignore_ordered=True`` argument.
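
Not part of the diff: a sketch of the combination that does succeed. Note that the keyword accepted by ``union_categoricals`` is ``ignore_order``.

.. code-block:: python

    import pandas as pd
    from pandas.api.types import union_categoricals

    a = pd.Categorical(["a", "b"], ordered=True)
    b = pd.Categorical(["a", "b", "c"], ordered=True)

    # Dropping the order information lets categoricals with different
    # category sets be unioned; the result is unordered.
    print(union_categoricals([a, b], ignore_order=True))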

doc/source/user_guide/dsintro.rst

Lines changed: 2 additions & 2 deletions
@@ -31,9 +31,9 @@ Series
 type (integers, strings, floating point numbers, Python objects, etc.). The axis
 labels are collectively referred to as the **index**. The basic method to create a :class:`Series` is to call:

-::
+.. code-block:: python

-   >>> s = pd.Series(data, index=index)
+   s = pd.Series(data, index=index)

 Here, ``data`` can be many different things:
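
As a sketch of the constructor call shown above (values are illustrative): ``data`` may be an ndarray, a dict, or a scalar.

.. code-block:: python

    import numpy as np
    import pandas as pd

    index = ["a", "b", "c", "d"]

    print(pd.Series(np.random.randn(4), index=index))  # from an ndarray
    print(pd.Series({"b": 1, "a": 0, "c": 2}))         # from a dict
    print(pd.Series(5.0, index=index))                 # from a scalar, broadcast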
