
fix(profiling): remove slow getpid call from memalloc path #11848


Merged
nsrip-dd merged 1 commit into main from nick.ripley/remove-memalloc-getpid on Jan 2, 2025

Conversation

nsrip-dd (Contributor) commented Jan 2, 2025

memalloc uses getpid to detect whether the process has forked, so that
we can unlock the memalloc lock in the child process (in case it was
held when the process forked). Unfortunately the getpid call is quite
slow. From the man page: "calls to getpid() always invoke the actual
system call, rather than returning a cached value." Furthermore, we
always attempt to take the lock for allocations, even if we aren't
going to sample them. So this effectively adds a syscall to every
allocation.

Move this logic out of the allocation path. Switch to using
pthread_atfork handlers to ensure that the lock is held prior to
forking, and unlock it in the parent and child after forking. This
(maybe) has the added benefit of making sure the data structures are in
a consistent state in the child process after forking. Unclear if that's
an issue prior to this change, though. I may be missing some code that
resets the profiler on fork anyway?
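
For illustration, here is a minimal sketch of the pthread_atfork pattern
described above, assuming a plain pthread mutex; the names (memalloc_lock,
memalloc_prefork, and so on) are hypothetical and are not the identifiers
used in _memalloc_reentrant.h.

/* Hypothetical sketch, not the actual _memalloc_reentrant.h code. */
#include <pthread.h>

static pthread_mutex_t memalloc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take the lock before fork() so the child never inherits it in a
 * locked state owned by a thread that doesn't exist in the child. */
static void memalloc_prefork(void)
{
    pthread_mutex_lock(&memalloc_lock);
}

/* Release it in both the parent and the child once fork() completes. */
static void memalloc_postfork_parent(void)
{
    pthread_mutex_unlock(&memalloc_lock);
}

static void memalloc_postfork_child(void)
{
    pthread_mutex_unlock(&memalloc_lock);
}

/* Called once at module initialization; the handlers then run
 * automatically around every fork(), so the allocation path itself
 * no longer needs a getpid() check. */
static void memalloc_install_fork_handlers(void)
{
    pthread_atfork(memalloc_prefork, memalloc_postfork_parent, memalloc_postfork_child);
}

Compared with checking getpid() on every allocation, the cost here is paid
only at fork time.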

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy


github-actions bot commented Jan 2, 2025

CODEOWNERS have been resolved as:

releasenotes/notes/profiling-remove-getpid-from-memalloc-74f54043accdfc9e.yaml  @DataDog/apm-python
ddtrace/profiling/collector/_memalloc.c                                 @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap.c                            @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_reentrant.h                       @DataDog/profiling-python

@datadog-dd-trace-py-rkomorn

Datadog Report

Branch report: nick.ripley/remove-memalloc-getpid
Commit report: 3277223
Test service: dd-trace-py

✅ 0 Failed, 289 Passed, 1179 Skipped, 8m 2.93s Total duration (28m 21.92s time saved)


pr-commenter bot commented Jan 2, 2025

Benchmarks

Benchmark execution time: 2025-01-02 19:16:11

Comparing candidate commit a69241a in PR branch nick.ripley/remove-memalloc-getpid with baseline commit e1eccbd in branch main.

Found 1 performance improvement and 0 performance regressions! Performance is the same for 393 metrics, 2 unstable metrics.

scenario:flasksimple-profiler

  • 🟩 execution_time [-1.339ms; -1.330ms] or [-38.347%; -38.078%]

nsrip-dd force-pushed the nick.ripley/remove-memalloc-getpid branch from 3277223 to a69241a on January 2, 2025 18:36
nsrip-dd changed the title from "wip: fix(profiling): remove slow getpid call from memalloc path" to "fix(profiling): remove slow getpid call from memalloc path" on Jan 2, 2025
nsrip-dd marked this pull request as ready for review January 2, 2025 18:50
nsrip-dd requested review from a team as code owners January 2, 2025 18:50
nsrip-dd merged commit 6bfe77e into main Jan 2, 2025
208 of 212 checks passed
nsrip-dd deleted the nick.ripley/remove-memalloc-getpid branch January 2, 2025 19:26

github-actions bot commented Jan 2, 2025

The backport to 2.17 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run these commands in your terminal:

# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.17 2.17
# Navigate to the new working tree
cd .worktrees/backport-2.17
# Create a new branch
git switch --create backport-11848-to-2.17
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 6bfe77ede64278fadbd64131fa14ad123417c7ec
# Push it to GitHub
git push --set-upstream origin backport-11848-to-2.17
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.17

Then, create a pull request where the base branch is 2.17 and the compare/head branch is backport-11848-to-2.17.


github-actions bot commented Jan 2, 2025

The backport to 2.19 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run these commands in your terminal:

# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.19 2.19
# Navigate to the new working tree
cd .worktrees/backport-2.19
# Create a new branch
git switch --create backport-11848-to-2.19
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 6bfe77ede64278fadbd64131fa14ad123417c7ec
# Push it to GitHub
git push --set-upstream origin backport-11848-to-2.19
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.19

Then, create a pull request where the base branch is 2.19 and the compare/head branch is backport-11848-to-2.19.

github-actions bot pushed a commit that referenced this pull request Jan 2, 2025

(cherry picked from commit 6bfe77e)
nsrip-dd (Author) commented Jan 2, 2025

2.17 backport is blocked on #11801.

2.19 backport failed as well. Seems like #11460 isn't on the 2.19 branch?

nsrip-dd pushed a commit that referenced this pull request Jan 8, 2025
…2.18] (#11849)

Backport 6bfe77e from #11848 to 2.18.

github-actions bot pushed a commit that referenced this pull request Jan 15, 2025

(cherry picked from commit 6bfe77e)
taegyunkim pushed a commit that referenced this pull request Jan 15, 2025
…2.19] (#11964)

Backport 6bfe77e from #11848 to 2.19.

Co-authored-by: Nick Ripley <[email protected]>
nsrip-dd added a commit that referenced this pull request May 14, 2025
We added locking to memalloc, the memory profiler, in #11460 in order to
address crashes. These locks made the crashes go away, but significantly
increased the baseline overhead of the profiler and introduced subtle
bugs. The locks we added turned out to be fundamentally incompatible
with the global interpreter lock (GIL), at least with the implementation
from #11460. This PR refactors the profiler to use the GIL exclusively
for locking.

First, we should acknowledge no-GIL and subinterpreters. As of right
now, our module does not support either. A module has to explicitly
opt-in to support either, so there is no risk of those modes being
enabled under our feet. Supporting either mode is likely a repo-wide
project. For now, we can assume the GIL exists.

This work was motivated by overhead. We currently acquire and release
locks in every memory allocation and free. Even when the locks aren't
contended, allocations and frees are very frequent, and the extra work
adds up. We add roughly 8x overhead to the baseline cost of allocation
just with our locking, not including the cost of actually sampling an
allocation. We can't get rid of this overhead just by reducing sampling
frequency.

There are a few rules to follow in order to use the GIL correctly for
locking:

1) The GIL is held when a C extension function is called, _except_
   possibly in the raw allocator, which we do not profile
2) The GIL may be released during Python C API calls. Even if it is
   released, though, it will be held again after the call
3) Thus, the GIL creates critical sections only between Python C API
   calls and the beginning and end of C extension functions. Modifications
   to shared state across those points are not atomic.
4) If we take a lock of our own in C extension code (e.g. a
   pthread_mutex) and the extension code releases the GIL, then the
   program will deadlock due to lock order inversion (see the sketch
   after this list). We can only safely take locks in C extension code
   when the GIL is released.
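
To make rule 4 concrete, here is a hypothetical sketch (not code from the
profiler) of how the deadlock arises; assume two threads enter this
extension function while holding the GIL:

#include <pthread.h>
#include <Python.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void broken_sample(void)
{
    /* Thread A: holds the GIL, now also takes state_lock. */
    pthread_mutex_lock(&state_lock);

    /* Any Python C API call here may release and re-acquire the GIL
     * (rule 2). If thread B acquires the GIL in that window, enters
     * this same function, and blocks on state_lock, then B holds the
     * GIL waiting for state_lock while A holds state_lock waiting for
     * the GIL: lock order inversion. */
    PyObject *scratch = PyList_New(0);
    Py_XDECREF(scratch);

    pthread_mutex_unlock(&state_lock);
}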

The crashes that #11460 addressed were due to breaking the first three
rules. In particular, we could race on accessing the shared scratch
buffer used when collecting tracebacks, which led to double-frees.
See #13185 for more details.

Our mitigation involved using C locks around any access to the shared
profiler state. We nearly broke rule 4 in the process. However, we used
try-locks specifically out of a fear of introducing deadlocks. Try-locks
mean that we attempt to acquire the lock, but return a failure if the
lock is already held. This stopped deadlocks, but introduced bugs. For
example:

- If we failed to take the lock when trying to report allocation
  profile events, we'd raise an exception, even though it was not
  reasonable for that operation to fail. See #12075.
- memalloc_heap_untrack, which removes tracked allocations, was guarded
  with a try-lock. If we couldn't acquire the lock, we would fail to
  remove a record for an allocation and effectively leak memory.
  See #13317.
- We attempted to make our locking fork-safe. The first attempt was
  inefficient; we made it less inefficient, but the fix only "worked"
  because of try-locks. See #11848.

Try-locks hide concurrency problems and we shouldn't use them. Using our
own locks requires releasing the GIL before acquisition, and then
re-acquiring the GIL. That adds unnecessary overhead. We don't
inherently need to do any off-GIL work. So, we should try to just use
the GIL as long as it is available.

The basic refactor is actually pretty simple. In a nutshell, we
rearrange the memalloc_add_event and memalloc_heap_track functions so
that they make the sampling decision, then take a traceback, then insert
the traceback into the appropriate data structure. Collecting a
traceback can release the GIL, so we make sure that modifying the data
structure happens completely after the traceback is collected. We also
safeguard against the possibility that the profiler was stopped during
sampling, if the GIL was released. This requires a small rearrangement
of memalloc_stop to make sure that the sampling functions don't see
partially-freed profiler data structures.
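
A rough sketch of that ordering, with hypothetical helper names standing
in for the real functions in _memalloc.c:

#include <stddef.h>

typedef struct {
    int running;               /* cleared by memalloc_stop */
    /* ... sample storage ... */
} memalloc_ctx_t;

/* Placeholder declarations for this sketch, not the real API. */
int   should_sample(memalloc_ctx_t *ctx, size_t size);
void *collect_traceback(void *ptr, size_t size);    /* may release the GIL */
void  discard_traceback(void *tb);
void  store_sample(memalloc_ctx_t *ctx, void *tb);

static void memalloc_sample_alloc(memalloc_ctx_t *ctx, void *ptr, size_t size)
{
    /* 1. Sampling decision first: cheap, no shared traceback state touched. */
    if (!should_sample(ctx, size))
        return;

    /* 2. Collect the traceback. This may release the GIL, so no shared
     *    state has been modified yet. */
    void *tb = collect_traceback(ptr, size);
    if (tb == NULL)
        return;

    /* 3. The profiler may have been stopped while the GIL was released;
     *    re-check before touching its data structures. */
    if (!ctx->running) {
        discard_traceback(tb);
        return;
    }

    /* 4. Insert the sample with the GIL held and no Python C API calls
     *    in between, so the update is effectively atomic. */
    store_sample(ctx, tb);
}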

For testing, I have mainly used the code from test_memealloc_data_race_regression.
I also added a debug mode, enabled by compiling with
MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at places where it
would be expected. For performance, I examined the overhead of profiling
on a basic Flask application.
nsrip-dd added a commit that referenced this pull request May 15, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025