fix: downgrade NoopMeterProvider log to TRACE #2970


Open · wants to merge 1 commit into main
4 changes: 2 additions & 2 deletions in opentelemetry/src/metrics/noop.rs

    @@ -5,7 +5,7 @@
     //! runtime impact.
     use crate::{
         metrics::{InstrumentProvider, Meter, MeterProvider},
    -    otel_debug, KeyValue,
    +    otel_trace, KeyValue,
     };
     use std::sync::Arc;

    @@ -26,7 +26,7 @@ impl NoopMeterProvider {

     impl MeterProvider for NoopMeterProvider {
         fn meter_with_scope(&self, scope: crate::InstrumentationScope) -> Meter {
    -        otel_debug!(name: "NoopMeterProvider.MeterCreation", meter_name = scope.name(), message = "Meter was obtained from a NoopMeterProvider. No metrics will be recorded. If global::meter_with_scope()/meter() was used, ensure that a valid MeterProvider is set globally before creating Meter.");
    +        otel_trace!(name: "NoopMeterProvider.MeterCreation", meter_name = scope.name(), message = "Meter was obtained from a NoopMeterProvider. No metrics will be recorded. If global::meter_with_scope()/meter() was used, ensure that a valid MeterProvider is set globally before creating Meter.");
Member commented:
Isn't DEBUG the right severity? This is useful when users don't see metrics flowing because the no-op provider is active. You can also filter out opentelemetry debug logs easily, as in the example below:

    let filter_fmt = EnvFilter::new("info").add_directive("opentelemetry=info".parse().unwrap());
Author replied:
> Isn't DEBUG the right severity? This is useful when users don't see metrics flowing because the no-op provider is active. You can also filter out opentelemetry debug logs easily, as in the example below:
>
>     let filter_fmt = EnvFilter::new("info").add_directive("opentelemetry=info".parse().unwrap());

That is what I am doing right now, but I am not sure what other logs are being suppressed if I only filter by info. Typically, these logs are namespaced by their crate and module path, but it seems that opentelemetry explicitly overrides this, so I cannot filter out logs from just the noop module.
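For context, tracing's EnvFilter directives match on an event's target, which defaults to the crate and module path unless the event sets an explicit target. A sketch of the two filtering options being discussed (the directive values are illustrative; whether the module-level one works depends on the target the events actually carry):

    # blanket filter: drop everything below INFO from the opentelemetry crates
    RUST_LOG=info,opentelemetry=info

    # module-level filter: would silence only the noop module, but only if the
    # events kept their default module-path target (reportedly they do not)
    RUST_LOG=info,opentelemetry::metrics::noop=off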

Author commented:
My personal guideline for logs is:

  • ERROR: Something is unrecoverably broken, likely affecting the user.
  • WARN: Something weird is happening; stuff might break in the future, but for now we can continue.
  • INFO: Major state change in the system, likely triggered by user activity.
  • DEBUG: We hit an odd condition somewhere, an early return from a function, etc. In most cases it should be conditional on something; the happy path should not log DEBUGs because that is likely too spammy.
  • TRACE: Follow along with the current function's execution. Usually placed unconditionally, i.e. it logs every request, packet, interaction, etc.

The line between DEBUG and TRACE is obviously blurry, which is why this is a guideline. In this particular case, I'd argue that DEBUG is too spammy. To give you some context: our application emitted over 3000 of these logs over the course of 25 minutes. We attach multiple counters to each socket we create, and we create one UDP socket for each outgoing DNS query (DNS should always use randomized ports so it can't be fingerprinted as easily).

It is my understanding that the NoopMeterProvider exists precisely to avoid the computational cost of metric collection when metrics aren't being exported, so this is absolutely a module that gets used in production code.

This log executes unconditionally on every meter creation. If you want to keep logging it at DEBUG, it should IMO be gated to fire once per unique meter name to avoid the log spam.
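A minimal sketch of that once-per-name gating, using only the standard library (the helper name is hypothetical and not part of the opentelemetry API):

```rust
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};

// Hypothetical helper: returns true only the first time a given meter
// name is seen, so the noop warning would be logged once per name.
fn should_log_noop_warning(meter_name: &str) -> bool {
    static SEEN: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
    let seen = SEEN.get_or_init(|| Mutex::new(HashSet::new()));
    // HashSet::insert returns true only when the value was not present yet.
    seen.lock().unwrap().insert(meter_name.to_string())
}

fn main() {
    assert!(should_log_noop_warning("dns_socket"));
    assert!(!should_log_noop_warning("dns_socket")); // suppressed on repeat
    assert!(should_log_noop_warning("http_client"));
}
```

The Mutex cost is paid only on meter creation, not on every instrument recording, so it does not defeat the purpose of the no-op provider.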

             Meter::new(Arc::new(NoopMeter::new()))
         }
     }