
Add consistent Add/Inc/Observe microbenchmarks, validate results on release (regressions vs past releases). #1759


Open · bwplotka opened this issue Mar 4, 2025 · 0 comments

bwplotka (Member) commented Mar 4, 2025

We had to revert #1661 due to major performance issues in the Add/Inc/Observe methods for cumulatives (#1748).

While #1661 added benchmarks, it seems they were unrealistic (e.g. involving context switching and extra work in between the measured calls). We also lacked a good basis for judging that 10ms overhead. We need to make sure:

  1. There are benchmarks we can rely on (a sketch follows below).
  2. Our release process (or even per-PR CI) ensures we run those and detect potential regressions.
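
A minimal sketch of what such benchmarks could look like, using the standard Go testing benchmark harness; the metric names and the parallel Observe scenario below are only illustrative assumptions, not a proposal for the final benchmark set:

```go
package prometheus_test

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

// BenchmarkCounterInc exercises the Inc hot path back to back, with no
// extra work between calls, so any added latency shows up directly.
func BenchmarkCounterInc(b *testing.B) {
	c := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "benchmark_counter_total", // illustrative name
		Help: "Counter used only for benchmarking Inc.",
	})
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Inc()
	}
}

// BenchmarkHistogramObserve exercises Observe from parallel goroutines,
// closer to how instrumented servers call it under contention.
func BenchmarkHistogramObserve(b *testing.B) {
	h := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "benchmark_histogram_seconds", // illustrative name
		Help:    "Histogram used only for benchmarking Observe.",
		Buckets: prometheus.DefBuckets,
	})
	b.ReportAllocs()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			h.Observe(0.0042)
		}
	})
}
```

For point 2, one possible approach is to run these with `go test -bench=. -count=10` against both the release candidate and the previous release, compare the results with benchstat, and flag the release (or PR) if the delta on Inc/Observe exceeds an agreed threshold.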

Help wanted!
