fix(profiler): update memalloc guard #11460
Merged
Conversation
Benchmarks

Benchmark execution time: 2024-12-19 19:06:17. Comparing candidate commit 655854b in PR branch. Found 0 performance improvements and 2 performance regressions! Performance is the same for 392 metrics, 2 unstable metrics.

Regressed scenarios:
- scenario:flasksimple-profiler
- scenario:iast_aspects-ospathsplit_aspect
P403n1x87 approved these changes on Nov 20, 2024
nsrip-dd approved these changes on Nov 20, 2024
taegyunkim approved these changes on Nov 20, 2024
See? If on day 1 of an implementation you have to support three operating systems and four processors, what hope is there?
nsrip-dd approved these changes on Dec 19, 2024
github-actions bot pushed a commit that referenced this pull request on Dec 19, 2024
Previously, the memory allocation profiler would use Python's builtin thread-local storage interfaces to set and get the state of a thread-local guard. I've updated a few things here.

* I think get/set idioms are slightly problematic for this type of code, since they push the responsibility of maintaining clean internal state up to the parent. A consequence of this is that propagating the underlying state _by value_ opens the door for race conditions if execution changes between contexts (unlikely here, but I think minimizing indirection is still cleaner). Accordingly, I've updated this to use native thread-local storage.
* Based on @nsrip-dd's observation, I widened the guard over `free()` operations. I believe this is correct, and if it isn't, the detriment is performance, not correctness.
* I got rid of the PY37 failovers.

We don't have any reproductions for the defects that prompted this change, but I've been running a patched library in an environment that _does_ reproduce the behavior, and I haven't seen any defects.

1. I don't believe this patch is harmful, and if our memory allocation tests pass then I believe it should be fine.
2. I have reason to believe this fixes a critical defect, which can cause crashes.

## Checklist
- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist
- [x] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

(cherry picked from commit 983c84f)
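For readers unfamiliar with the pattern, the sketch below shows the shape of the change described above: a reentrancy flag kept in native thread-local storage rather than read and written through an explicit TLS get/set API. It is a minimal illustration assuming a C11 compiler; `memalloc_in_progress`, `memalloc_take_guard`, `memalloc_yield_guard`, and `profiled_malloc` are hypothetical names, not the library's actual identifiers.

```c
/* Minimal sketch of a native thread-local reentrancy guard (C11).
 * All names are illustrative; the real memalloc implementation differs. */
#include <stdbool.h>
#include <stdlib.h>

/* One flag per thread: no get/set round-trips through an external TLS API,
 * and no guard state propagated back to the caller by value. */
static _Thread_local bool memalloc_in_progress = false;

/* Returns true if this thread acquired the guard, false if we are already
 * inside a profiled allocation on this thread (a reentrant call). */
static bool
memalloc_take_guard(void)
{
    if (memalloc_in_progress)
        return false;
    memalloc_in_progress = true;
    return true;
}

static void
memalloc_yield_guard(void)
{
    memalloc_in_progress = false;
}

/* Usage in an allocator hook: sampling (and, per this PR, free() handling)
 * only runs when the guard is taken, so the profiler never re-enters itself. */
void *
profiled_malloc(size_t size)
{
    void *ptr = malloc(size);
    if (ptr && memalloc_take_guard()) {
        /* ... record the sample; this may itself allocate ... */
        memalloc_yield_guard();
    }
    return ptr;
}
```

Because the flag lives in thread-local storage, no state crosses function boundaries by value, which is the indirection the description above calls out.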
github-actions bot pushed a commit that referenced this pull request on Dec 19, 2024
github-actions bot pushed a commit that referenced this pull request on Dec 19, 2024
taegyunkim pushed a commit that referenced this pull request on Dec 19, 2024
Backport 983c84f from #11460 to 2.18. Co-authored-by: David Sanchez <[email protected]>
github-actions bot pushed a commit that referenced this pull request on Jan 8, 2025
taegyunkim added a commit that referenced this pull request on Jan 15, 2025
Backport 983c84f from #11460 to 2.19. Co-authored-by: David Sanchez <[email protected]> Co-authored-by: Taegyun Kim <[email protected]>
nsrip-dd added a commit that referenced this pull request on Apr 2, 2025
Add a regression test for races in the memory allocation profiler. The test is marked skip for now, for a few reasons:

- It doesn't trigger the crash in a deterministic amount of time, so it's not really reasonable for CI or the local dev loop as-is
- It probably benefits more from having the thread sanitizer enabled, which we don't currently do for the memalloc extension

I'm adding the test so that we have an actual reproducer of the problem, easily runnable by any dd-trace-py developer, and committed somewhere people can find it. It's currently only really useful for local development. I plan to tweak/optimize some of the synchronization code to reduce memalloc overhead, and we need a reliable reproducer of the crashes the synchronization was meant to fix in order to be confident we don't reintroduce them.

The test reproduces the crash fixed by #11460, as well as the exception fixed by #12075. Both issues stem from the same problem: at one point, memalloc had no synchronization beyond the GIL protecting its internal state. It turns out that calling back into C Python APIs, as we do when collecting tracebacks, can in some cases lead to the GIL being released. So we need additional synchronization for state modification that straddles C Python API calls.

We previously only reliably saw this in a demo program but weren't able to reproduce it locally. Now that I understand the crash much better, I was able to create a standalone reproducer. The key elements are: allocate a lot, trigger GC a lot (including from memalloc traceback collection), and release the GIL during GC.

Important note: this only reliably crashes on Python 3.11. The very specific path to releasing the GIL that we hit was modified in 3.12 and later (see python/cpython#97922). We will probably support 3.11 for a while longer, so it's still worth having this test.
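To make the failure mode concrete, here is a minimal C sketch of the pattern the commit message describes: shared state is touched on both sides of a CPython API call, and because that call can release the GIL, another thread can interleave between the two touches. The names (`scratch_buffer`, `collect_traceback`) are illustrative, not the actual memalloc code.

```c
/* Sketch of the GIL-straddling race described above; names are illustrative.
 * Assume the GIL is held on entry, as it is for any C extension call. */
#include <Python.h>
#include <string.h>

/* Shared across threads, protected only by the GIL. */
static char scratch_buffer[4096];

static void
collect_traceback(void)
{
    /* First touch of the shared state. */
    memset(scratch_buffer, 0, sizeof scratch_buffer);

    /* Calling back into the CPython API can trigger GC, and GC can release
     * the GIL (reliably so on Python 3.11, per the commit message). Another
     * thread may run collect_traceback() here and clobber scratch_buffer. */
    PyObject *frames = PyList_New(0);

    /* Second touch: the two touches are not atomic across the call above,
     * so this can observe (or free) state another thread now owns. */
    scratch_buffer[0] = 'x';
    Py_XDECREF(frames);
}
```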
nsrip-dd added a commit that referenced this pull request on Apr 2, 2025
chojomok pushed a commit that referenced this pull request on Apr 7, 2025
nsrip-dd added a commit that referenced this pull request on May 14, 2025
We added locking to memalloc, the memory profiler, in #11460 in order to address crashes. These locks made the crashes go away, but significantly increased the baseline overhead of the profiler and introduced subtle bugs. The locks we added turned out to be fundamentally incompatible with the global interpreter lock (GIL), at least with the implementation from #11460. This PR refactors the profiler to use the GIL exclusively for locking.

First, we should acknowledge no-GIL and subinterpreters. As of right now, our module does not support either. A module has to explicitly opt in to support either, so there is no risk of those modes being enabled under our feet. Supporting either mode is likely a repo-wide project. For now, we can assume the GIL exists.

This work was motivated by overhead. We currently acquire and release locks on every memory allocation and free. Even when the locks aren't contended, allocations and frees are very frequent, and the extra work adds up. We add about ~8x overhead to the baseline cost of allocation just with our locking, not including the cost of actually sampling an allocation. We can't get rid of this overhead just by reducing sampling frequency.

There are a few rules to follow in order to use the GIL correctly for locking:

1) The GIL is held when a C extension function is called, _except_ possibly in the raw allocator, which we do not profile
2) The GIL may be released during C Python API calls. Even if it is released, though, it will be held again after the call
3) Thus, the GIL creates critical sections only between C Python API calls, and the beginning and end of C extension functions. Modifications to shared state across those points are not atomic.
4) If we take a lock of our own in C extension code (i.e. a pthread_mutex), and the extension code releases the GIL, then the program will deadlock due to lock order inversion. We can only safely take locks in C extension code when the GIL is released.

The crashes that #11460 addressed were due to breaking the first three rules. In particular, we could race on accessing the shared scratch buffer used when collecting tracebacks, which led to double-frees. See #13185 for more details.

Our mitigation involved using C locks around any access to the shared profiler state. We nearly broke rule 4 in the process. However, we used try-locks specifically out of a fear of introducing deadlocks. Try-locks mean that we attempt to acquire the lock, but return a failure if the lock is already held. This stopped deadlocks, but introduced bugs. For example:

- If we failed to take the lock when trying to report allocation profile events, we'd raise an exception when it was in fact not reasonable for doing that to fail. See #12075.
- memalloc_heap_untrack, which removes tracked allocations, was guarded with a try-lock. If we couldn't acquire the lock, we would fail to remove a record for an allocation and effectively leak memory. See #13317.
- We attempted to make our locking fork-safe. The first attempt was inefficient; we made it less inefficient, but the fix only "worked" because of try-locks. See #11848.

Try-locks hide concurrency problems and we shouldn't use them. Using our own locks requires releasing the GIL before acquisition and then re-acquiring the GIL. That adds unnecessary overhead. We don't inherently need to do any off-GIL work. So, we should try to just use the GIL as long as it is available.

The basic refactor is actually pretty simple. In a nutshell, we rearrange the memalloc_add_event and memalloc_heap_track functions so that they make the sampling decision, then take a traceback, then insert the traceback into the appropriate data structure. Collecting a traceback can release the GIL, so we make sure that modifying the data structure happens completely after the traceback is collected. We also safeguard against the possibility that the profiler was stopped during sampling, if the GIL was released. This requires a small rearrangement of memalloc_stop to make sure that the sampling functions don't see partially-freed profiler data structures.

For testing, I have mainly used the code from test_memalloc_data_race_regression. I also added a debug mode, enabled by compiling with MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at places where it would be expected. For performance, I examined the overhead of profiling on a basic flask application.
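A minimal sketch of that reordering, under the GIL rules listed above. Every identifier here (`memalloc_add_event_sketch`, `g_profiler`, and the helper stubs) is hypothetical and stands in for the real memalloc code:

```c
/* Sketch of the reordered sampling path: decide, collect, re-check, insert.
 * Assumes the GIL is held at entry and between any two CPython API calls. */
#include <Python.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct { int nframes; /* ... frame data ... */ } traceback_t;
typedef struct { traceback_t *events[64]; int count; } profiler_t;

static profiler_t *g_profiler;  /* set by start, NULLed by stop; GIL-guarded */

static bool should_sample(void) { return true; }           /* stand-in */
static void traceback_free(traceback_t *tb) { free(tb); }  /* stand-in */

static traceback_t *
collect_traceback(void)
{
    /* Calling into the CPython API here may release the GIL. */
    PyObject *tmp = PyList_New(0);
    Py_XDECREF(tmp);
    return calloc(1, sizeof(traceback_t));
}

static void
memalloc_add_event_sketch(void)
{
    /* 1. Sampling decision: plain C, no CPython calls, atomic under the GIL. */
    if (g_profiler == NULL || !should_sample())
        return;

    /* 2. Take the traceback; the GIL may be released inside this call, so
     *    the profiler may be stopped underneath us. */
    traceback_t *tb = collect_traceback();
    if (tb == NULL)
        return;

    /* 3. Re-check shared state after the possible GIL release, then mutate
     *    the data structure with no CPython calls in between. */
    if (g_profiler == NULL || g_profiler->count >= 64) {
        traceback_free(tb);  /* profiler stopped (or buffer full) mid-sample */
        return;
    }
    g_profiler->events[g_profiler->count++] = tb;
}
```

The key design point is step 3: all mutation of shared structures happens strictly after the last CPython API call, inside a single GIL critical section, so no lock of our own is needed.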
nsrip-dd added a commit that referenced this pull request on May 15, 2025
nsrip-dd added a commit that referenced this pull request on May 20, 2025
nsrip-dd added a commit that referenced this pull request on May 20, 2025
nsrip-dd added a commit that referenced this pull request on May 20, 2025