libcore: rand: Use a pure Rust implementation of ISAAC RNG #6073
Conversation
I can't test whether this fixes #6061 since I can't reproduce it on any of the machines I have access to; it would be helpful if someone who sees that bug could test this.
I mentioned on IRC that I'm curious how it compares to the C implementation when there aren't stack switches (
As a permanent record of the answer to @thestinger's curiosity:

- Current implementation (perf results)
- With fast_ffi, default stack size (perf results)
- With fast_ffi,
- This patch (perf results)
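The benchmark the thread refers to (summing 100,000,000 random `u32`s and timing it) can be sketched roughly as follows. This is an illustrative harness, not the PR's actual test code; the stand-in xorshift generator here is only so the example is self-contained, since the PR itself benchmarks the ISAAC implementation:

```rust
use std::time::Instant;

// Minimal xorshift32 generator used only as a self-contained stand-in
// for the RNG under test.
struct XorShift32 {
    state: u32,
}

impl XorShift32 {
    fn new(seed: u32) -> Self {
        // xorshift must not start from zero.
        XorShift32 { state: seed.max(1) }
    }

    fn next_u32(&mut self) -> u32 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        self.state = x;
        x
    }
}

fn main() {
    let mut rng = XorShift32::new(0xDEAD_BEEF);
    let start = Instant::now();
    let mut sum: u32 = 0;
    // Sum 100,000,000 random u32s, as described in the PR text.
    for _ in 0..100_000_000u64 {
        sum = sum.wrapping_add(rng.next_u32());
    }
    // Print the sum so the loop isn't optimised away.
    println!("sum = {}, elapsed = {:?}", sum, start.elapsed());
}
```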
Oops, sorry, I reviewed this. I'm wondering why this isn't ready for merging yet.
@thestinger I did some more experimentation, and by adding
The C version is
I couldn't help but notice you use mut-fields in the implementation. I suppose the compiler won't complain because core.rc has
I just want to point out that working around mutability issues with transmutes is incorrect and will break when Rust communicates aliasing/mutability information to LLVM. It shouldn't be considered as an option.
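In modern Rust, the safe alternative to transmuting away immutability is checked interior mutability via `std::cell::Cell`, which lets a method take `&self` while still updating state. A minimal sketch (the `Rng` type and its LCG step are hypothetical, not the PR's code):

```rust
use std::cell::Cell;

// Hypothetical RNG whose state lives in a Cell, so `next` can take
// `&self` without any transmute or mut-field tricks.
struct Rng {
    state: Cell<u32>,
}

impl Rng {
    fn next(&self) -> u32 {
        // A simple linear congruential step, purely for illustration.
        let x = self
            .state
            .get()
            .wrapping_mul(1664525)
            .wrapping_add(1013904223);
        self.state.set(x);
        x
    }
}
```

Unlike a transmute, `Cell` is visible to the compiler, so the aliasing information passed to LLVM stays correct.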
@Thiez I totally agree that things should take
This replaces the wrapper around the runtime RNG with a pure Rust implementation of the same algorithm. This is much faster (up to 5x), and is hopefully safer. There is still (a little) room for optimisation: testing by summing 100,000,000 random `u32`s indicates this is about ~~40-50%~~ 10% slower than the pure C implementation (running as standalone executable, not in the runtime). (Only 6d50d55 is part of this PR, the first two are from #6058, but are required for the rt rng to be correct to compare against in the tests.)
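For reference, the core of the ISAAC algorithm this PR reimplements looks roughly like the following in modern Rust. This is a simplified sketch of one output round only (the seeding/`randinit` mixing from Bob Jenkins' reference code is omitted), not the PR's actual implementation:

```rust
const SIZE: usize = 256; // ISAAC operates on 256 32-bit words

struct Isaac {
    mm: [u32; SIZE],  // internal state
    rsl: [u32; SIZE], // results of the most recent round
    aa: u32,
    bb: u32,
    cc: u32,
}

impl Isaac {
    // One mixing round: refills `rsl` with 256 fresh outputs.
    fn isaac_round(&mut self) {
        self.cc = self.cc.wrapping_add(1); // counter, incremented once per round
        self.bb = self.bb.wrapping_add(self.cc);
        for i in 0..SIZE {
            let x = self.mm[i];
            // Barrel-shift schedule cycles through four shift amounts.
            self.aa = match i % 4 {
                0 => self.aa ^ (self.aa << 13),
                1 => self.aa ^ (self.aa >> 6),
                2 => self.aa ^ (self.aa << 2),
                _ => self.aa ^ (self.aa >> 16),
            };
            self.aa = self.aa.wrapping_add(self.mm[(i + SIZE / 2) % SIZE]);
            let y = self.mm[(x as usize >> 2) % SIZE]
                .wrapping_add(self.aa)
                .wrapping_add(self.bb);
            self.mm[i] = y;
            self.bb = self.mm[(y as usize >> 10) % SIZE].wrapping_add(x);
            self.rsl[i] = self.bb;
        }
    }
}
```

The indirection through `mm[(x >> 2) % SIZE]` and `mm[(y >> 10) % SIZE]` corresponds to the byte-offset indexing macro in the C reference implementation; the wrapping arithmetic mirrors C's unsigned overflow semantics, which the original runtime RNG relied on.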