Correctly rounded floating point div_euclid. #134145

Open · wants to merge 5 commits into base: master
18 changes: 18 additions & 0 deletions library/std/benches/f128/mod.rs
@@ -0,0 +1,18 @@
use core::f128::consts::PI;

use test::{Bencher, black_box};

#[bench]
fn div_euclid_small(b: &mut Bencher) {
b.iter(|| black_box(10000.12345f128).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_medium(b: &mut Bencher) {
b.iter(|| black_box(1.123e30f128).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_large(b: &mut Bencher) {
b.iter(|| black_box(1.123e4000f128).div_euclid(black_box(PI)));
}
13 changes: 13 additions & 0 deletions library/std/benches/f16/mod.rs
@@ -0,0 +1,13 @@
use core::f16::consts::PI;

use test::{Bencher, black_box};

#[bench]
fn div_euclid_small(b: &mut Bencher) {
b.iter(|| black_box(20.12f16).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_large(b: &mut Bencher) {
b.iter(|| black_box(50000.0f16).div_euclid(black_box(PI)));
}
18 changes: 18 additions & 0 deletions library/std/benches/f32/mod.rs
@@ -0,0 +1,18 @@
use core::f32::consts::PI;

use test::{Bencher, black_box};

#[bench]
fn div_euclid_small(b: &mut Bencher) {
b.iter(|| black_box(130.12345f32).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_medium(b: &mut Bencher) {
b.iter(|| black_box(1.123e7f32).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_large(b: &mut Bencher) {
b.iter(|| black_box(1.123e32f32).div_euclid(black_box(PI)));
}
18 changes: 18 additions & 0 deletions library/std/benches/f64/mod.rs
@@ -0,0 +1,18 @@
use core::f64::consts::PI;

use test::{Bencher, black_box};

#[bench]
fn div_euclid_small(b: &mut Bencher) {
b.iter(|| black_box(1234.1234578f64).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_medium(b: &mut Bencher) {
b.iter(|| black_box(1.123e15f64).div_euclid(black_box(PI)));
}

#[bench]
fn div_euclid_large(b: &mut Bencher) {
b.iter(|| black_box(1.123e300f64).div_euclid(black_box(PI)));
}
6 changes: 6 additions & 0 deletions library/std/benches/lib.rs
@@ -1,7 +1,13 @@
// Disabling in Miri as these would take too long.
#![cfg(not(miri))]
#![feature(test)]
#![feature(f16)]
#![feature(f128)]

extern crate test;

mod f128;
mod f16;
mod f32;
mod f64;
mod hash;
32 changes: 16 additions & 16 deletions library/std/src/f128.rs
@@ -4,6 +4,8 @@
//!
//! Mathematically significant numbers are provided in the `consts` sub-module.

mod soft;

#[cfg(test)]
mod tests;

@@ -235,10 +237,14 @@ impl f128 {

/// Calculates Euclidean division, the matching method for `rem_euclid`.
///
/// This computes the integer `n` such that
/// In infinite precision this computes the integer `n` such that
[Review comment, Member] Usually I see this arranged as "This computes the operation on the integer `n` with infinite precision".

[Reply, @traviscross (Contributor), Feb 1, 2025] It's a nit, but I'd say the calculation is done "in infinite precision" rather than "with" it, just as I might say that a calculation is done "in a finite field". Of course, if we want, this choice could be avoided by phrasing it differently. E.g., IEEE 754 tends to say things like "...computes the infinitely precise product x × y..." or that "every operation shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range".

[Reply, @workingjubilee (Member), Feb 3, 2025] Those are fine too. It's honestly at least partly that "In infinite precision" feels like an odd way to start the sentence. It places it next to the subject ("this", the function) when it should be attached either to the object or the verb.

/// `self = n * rhs + self.rem_euclid(rhs)`.
/// In other words, the result is `self / rhs` rounded to the integer `n`
/// such that `self >= n * rhs`.
/// In other words, the result is `self / rhs` rounded to the
/// integer `n` such that `self >= n * rhs`.
///
/// However, due to a floating point round-off error the result can be rounded if
/// `self` is much larger in magnitude than `rhs`, such that `n` can't be represented
/// exactly.
[Comment on lines +245 to +247, Member] This makes the "with infinite precision" sound incorrect. It is not a random "round-off error"; the rounding is a consequence of fitting the result into the floating-point value's representation. It is questionable to call it an "error" at all, since it is a deliberate choice in how floats work. Please be clear when identifying such. We can afford a few words.
///
/// # Precision
///
@@ -264,29 +270,23 @@ impl f128 {
#[unstable(feature = "f128", issue = "116909")]
#[must_use = "method returns a new number and does not mutate the original value"]
pub fn div_euclid(self, rhs: f128) -> f128 {
let q = (self / rhs).trunc();
if self % rhs < 0.0 {
return if rhs > 0.0 { q - 1.0 } else { q + 1.0 };
}
q
soft::div_euclid(self, rhs)
}
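The diff above replaces the previous inline implementation with a call into the new soft-float routine. As a minimal sketch (this is the removed code, not the PR's `soft` module, and `naive_div_euclid` is a hypothetical name used here for illustration), the old approach computed the quotient in the target type before truncating, which is why the result could be off when `self` is much larger in magnitude than `rhs`:

```rust
// Sketch of the *previous* div_euclid implementation (shown removed in the
// diff above), written for f64. The division self / rhs is rounded to f64
// before trunc(), so when the true integer quotient is not exactly
// representable, that rounding leaks into the truncated result.
fn naive_div_euclid(a: f64, b: f64) -> f64 {
    let q = (a / b).trunc();
    if a % b < 0.0 {
        // Remainder was negative: step the quotient toward negative infinity
        // so that a >= n * b holds for the returned integer n.
        return if b > 0.0 { q - 1.0 } else { q + 1.0 };
    }
    q
}

fn main() {
    // Small magnitudes: the naive version matches the Euclidean definition.
    assert_eq!(naive_div_euclid(7.0, 3.0), 2.0);
    assert_eq!(naive_div_euclid(-7.0, 3.0), -3.0);
    assert_eq!(naive_div_euclid(-7.0, -3.0), 3.0);
    // Large magnitudes: 1.123e300 / PI has far more significant bits than
    // f64 can hold, so the rounded division may not yield the correctly
    // rounded integer quotient that the new soft-float routine computes.
    let q = naive_div_euclid(1.123e300, std::f64::consts::PI);
    println!("naive quotient: {q}");
}
```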

/// Calculates the least nonnegative remainder of `self (mod rhs)`.
///
/// In particular, the return value `r` satisfies `0.0 <= r < rhs.abs()` in
/// most cases. However, due to a floating point round-off error it can
/// result in `r == rhs.abs()`, violating the mathematical definition, if
/// `self` is much smaller than `rhs.abs()` in magnitude and `self < 0.0`.
/// This result is not an element of the function's codomain, but it is the
/// closest floating point number in the real numbers and thus fulfills the
/// property `self == self.div_euclid(rhs) * rhs + self.rem_euclid(rhs)`
/// approximately.
/// In infinite precision the return value `r` satisfies `0 <= r < rhs.abs()`.
///
/// However, due to a floating point round-off error the result can round to
[Comment on lines +278 to +280, Member] Same comments as above.
/// `rhs.abs()` if `self` is much smaller than `rhs.abs()` in magnitude and `self < 0.0`.
///
/// # Precision
///
/// The result of this operation is guaranteed to be the rounded
/// infinite-precision result.
///
/// The result is precise when `self >= 0.0`.
[Review comment, Member] Floating-point operations are always precise (in the sense that they are always precisely conforming to floating-point semantics); they are not always accurate. See https://en.wikipedia.org/wiki/Accuracy_and_precision for more on the distinction I am drawing.

Suggested change:
-    /// The result is precise when `self >= 0.0`.
+    /// The result is exactly accurate when `self >= 0.0`.

We could also say within 0 ULP, I suppose?

[Reply, @traviscross (Contributor), Feb 1, 2025] Probably I'd say the results of floating-point operations are always accurate to a given degree of precision. We could side-step this and say e.g. that the "computed result is the same as the infinitely precise result when..." or "the infinitely precise result is exactly representable when...".

[Reply, Member] The "exactly representable" form sounds good.
///
/// # Examples
///
/// ```
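The revised `rem_euclid` docs note that the mathematical bound `0 <= r < rhs.abs()` can fail in floating point: for a tiny negative `self`, adding `rhs.abs()` rounds to `rhs.abs()` itself. A minimal sketch of that edge case (`naive_rem_euclid` is a hypothetical name; the body mirrors the common formulation of the operation, which is an assumption here, not the PR's code):

```rust
// Sketch of the usual float rem_euclid formulation, illustrating the edge
// case discussed above: the "0 <= r < rhs.abs()" bound only holds in
// infinite precision.
fn naive_rem_euclid(a: f32, b: f32) -> f32 {
    let r = a % b;
    // `%` takes the sign of the dividend, so shift negative remainders
    // up by |rhs| to make them nonnegative.
    if r < 0.0 { r + b.abs() } else { r }
}

fn main() {
    // Ordinary cases: the result lies in [0, |rhs|).
    assert_eq!(naive_rem_euclid(7.0, 3.0), 1.0);
    assert_eq!(naive_rem_euclid(-7.0, 3.0), 2.0);
    // Edge case: 1.0 + (-1e-30) rounds to 1.0 in f32, so the result equals
    // rhs.abs(), just outside the mathematical codomain.
    assert_eq!(naive_rem_euclid(-1e-30, 1.0), 1.0);
}
```

This is why the doc comment hedges the bound with "in infinite precision" and calls out the rounding toward `rhs.abs()` explicitly.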