new libuv bindings #1878
Conversation
Thanks for your hard work on this! I will push it to the trybots tonight and review it in detail tomorrow.
I'm very happy with this! Good work. My primary concern going forward is that we get some networking capability set up soon, so let's push on that next. Then maybe we can go back and do some refactoring. The current implementation imposes quite a bit of overhead with all the message passing, the 16-random-byte vector allocation and generation, and allocating all handles on the heap. The Windows and FreeBSD bots are having some difficulties with this. I'll investigate (and you can too), and once we resolve those we can push this to master.
The issue on FreeBSD is probably the same as #1325. If I change core::uv to use […]
I repro'd the failures on a win2008r2 64-bit VM. I even tried making the same change as above for #1325 (curious whether it was related). I also tried RUST_THREADS=1, but the tests still hang.
I did some debugging today. This is what I said in IRC earlier: […]
@olsonjeffery Can you take another look at this and see if restructuring it in such a way fixes the problem?
Still faults on FreeBSD. Trying with one thread in the auxiliary scheduler.
The core impl is there, with an async handle in place to take incoming operations from user code. No actual uv handles/operations are implemented yet, though.
As it stands, basic async and timer support is added.
Adds new C/C++ methods bound in Rust for uvtmp::uv.
- removing the remains of uvtmp.rs and rust_uvtmp.rs
- removing the displaced, low-level libuv bindings in uv.rs and rust_uv.cpp
net complexity increase :/
Because of the last change, the loop ptr is no longer cleaned up when the loop exits. This API call addresses that. Sadly, the loop ptr is not "reusable" across multiple calls to uv::run().
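For illustration, a minimal modern-Rust sketch of the "freed once, not reusable" contract this describes; the types and names here are hypothetical stand-ins, not the PR's code:

```rust
// Hypothetical sketch: model the C loop pointer as an Option so the
// "cleaned up after run(), not reusable" rule is enforced by take().
struct NativeLoop; // stand-in for the underlying uv_loop_t allocation

struct UvLoop {
    native: Option<Box<NativeLoop>>,
}

impl UvLoop {
    fn run(&mut self) {
        let native = self.native.take().expect("loop already consumed by run()");
        // ... drive the event loop here ...
        drop(native); // freed on exit; a second run() would panic
    }
}

fn main() {
    let mut uv_loop = UvLoop { native: Some(Box::new(NativeLoop)) };
    uv_loop.run();
    // uv_loop.run(); // not reusable across multiple calls to run()
}
```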
Tonight the bots are showing a bunch of valgrind errors: http://bot.rust-lang.org/logs/2012/01/28/2012-02-28T06:37:11Z-06ef7eb9-c63e-4b51-abf5-46e34b947fc5.html I don't see them here. Will have to investigate tomorrow.
Merged. Thanks!
Included in this pull request are bindings for `uv_async_*` calls and some of the `uv_timer_*` calls (init, start, and stop). First, some samples (from the tests; the code is in std::uv):
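(The PR's actual test samples are not included in this excerpt. As a loose stand-in, here is a modern-Rust mock of the call shape the description implies; the names and signatures are illustrative assumptions, not the real API.)

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for a uv_async handle: sending on the channel wakes the loop.
struct AsyncHandle(mpsc::Sender<()>);

impl AsyncHandle {
    fn async_send(&self) {
        self.0.send(()).unwrap(); // mimics uv::async_send(handle)
    }
}

// Mimics uv::async_init(loop, on_send, after_init): the first callback
// runs on every send; the second runs once the handle exists.
fn async_init<F, G>(on_send: F, after_init: G) -> AsyncHandle
where
    F: Fn() + Send + 'static,
    G: FnOnce(&AsyncHandle),
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // the "event loop": block until woken, then fire the callback
        while rx.recv().is_ok() {
            on_send();
        }
    });
    let handle = AsyncHandle(tx);
    after_init(&handle); // handle is registered; sends are now safe
    handle
}

fn main() {
    let handle = async_init(
        || println!("async callback fired"),
        |_| println!("uv_async handle created"),
    );
    handle.async_send();
    thread::sleep(Duration::from_millis(50)); // let the loop thread run
}
```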
(The call to `uv::async_init` takes two callbacks: the first is the actual callback to be invoked by calls to `uv::async_send`, while the latter is processed once the uv_async handle is created.) And again, with a uv_timer:
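(Again a hedged, modern-Rust mock of a timer's start/stop shape, not the PR's code; these names and signatures are assumptions.)

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

struct TimerHandle(mpsc::Sender<()>);

impl TimerHandle {
    fn stop(&self) {
        let _ = self.0.send(()); // mimics uv::timer_stop; no-op if already fired
    }
}

// Mimics uv::timer_start(handle, timeout_ms, callback): fire once after
// the timeout unless stop() is called first.
fn timer_start<F: FnOnce() + Send + 'static>(timeout_ms: u64, cb: F) -> TimerHandle {
    let (stop_tx, stop_rx) = mpsc::channel();
    thread::spawn(move || {
        // fire unless a stop message arrives before the timeout
        if stop_rx.recv_timeout(Duration::from_millis(timeout_ms)).is_err() {
            cb();
        }
    });
    TimerHandle(stop_tx)
}

fn main() {
    let _timer = timer_start(10, || println!("timer fired"));
    thread::sleep(Duration::from_millis(50)); // let the timer fire
}
```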
Obviously, a lot of the API and handle types are missing, but the core pattern for adding new handles is in place.
How it works, briefly:
From the user's perspective:
The user calls uv::loop_new() and gets a uv_loop in return. They use the uv_loop to create new handles (uv_async, uv_timer, etc.). The functions that create a new handle take callbacks that are invoked asynchronously after the handle is added to the loop. In these callbacks, the user gets access to the new handle and can then perform various operations against libuv. It's all async, and all thread-safe. So: creating new handles is async.
They can, at their leisure, start the libuv event loop by calling uv::run() or uv::run_in_bg() (the former is blocking, like its C counterpart, while the latter is not). uv::run() will return when the last handle associated with it is unref'd via uv::close(). We don't really need a uv::loop_delete(): the loop is cleaned up after uv_run() returns and probably shouldn't be used again.
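A rough sketch of that blocking-vs-background distinction, again in modern Rust with hypothetical names: run() drives the loop on the current thread until every sender (every live "handle") is gone, and run_in_bg() moves the same call onto another thread.

```rust
use std::sync::mpsc;
use std::thread;

type Op = Box<dyn FnOnce() + Send>;

// Blocks the calling thread, loosely mirroring "uv::run() returns when
// the last handle associated with it is unref'd": the loop exits once
// every Sender (every live "handle") has been dropped.
fn run(ops: mpsc::Receiver<Op>) {
    while let Ok(op) = ops.recv() {
        op();
    }
}

// Non-blocking variant: same loop, different thread.
fn run_in_bg(ops: mpsc::Receiver<Op>) -> thread::JoinHandle<()> {
    thread::spawn(move || run(ops))
}

fn main() {
    let (handle, ops) = mpsc::channel::<Op>();
    let bg = run_in_bg(ops);
    handle.send(Box::new(|| println!("op ran on the loop thread"))).unwrap();
    drop(handle); // dropping the last "handle" lets the loop exit
    bg.join().unwrap();
}
```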
Under the hood:
When the user calls uv::loop_new(), we spin up a new scheduler that blocks on incoming messages, within a while loop, on a port that it creates. It sends the channel back out to the invoker (who returns it to the user as a uv_loop). We also register an async handle (unref'd, so it doesn't affect loop lifetime) whose job is to buffer incoming requests from the user for processing by libuv. All method calls from the user (with the exception of uv::run() and uv::async_send()) are buffered through this async handle (called the op_handle).
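A minimal sketch of that loop_new() handshake, with std::sync::mpsc standing in for the Rust-of-2012 ports and channels; the names and message shapes are assumptions.

```rust
use std::sync::mpsc;
use std::thread;

enum LoopMsg {
    // buffered user operations, routed through the "op_handle"
    Op(Box<dyn FnOnce() + Send>),
    Exit,
}

// Spawn the loop task; it creates the port, hands the channel back to
// the invoker, then blocks receiving messages in a while loop.
fn loop_new() -> mpsc::Sender<LoopMsg> {
    let (handshake_tx, handshake_rx) = mpsc::channel();
    thread::spawn(move || {
        let (tx, rx) = mpsc::channel();
        handshake_tx.send(tx).unwrap(); // channel goes back to the invoker
        while let Ok(msg) = rx.recv() {
            match msg {
                LoopMsg::Op(op) => op(), // libuv would process this on the loop
                LoopMsg::Exit => break,
            }
        }
    });
    handshake_rx.recv().unwrap() // this is the user's "uv_loop"
}

fn main() {
    let uv_loop = loop_new();
    uv_loop
        .send(LoopMsg::Op(Box::new(|| println!("op on loop thread"))))
        .unwrap();
    uv_loop.send(LoopMsg::Exit).unwrap();
}
```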
When the user calls uv::run(), we create a new, single-threaded scheduler that contains the libuv loop and call uv_run() there. Any operations the user performed on the loop before running will be processed at this time.
Eventually, the last handle is unref'd and uv_run() returns. We clean up the op_handle and then notify the Rust loop to exit.
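A compressed sketch of that shutdown sequence (hypothetical names, modern Rust): once the simulated refcount reaches zero, uv_run "returns", the op_handle is dropped, and the Rust side is notified.

```rust
use std::sync::mpsc;

struct OpHandle; // stand-in for the unref'd async handle buffering requests

fn uv_run_sketch(mut live_handles: u32, op_handle: OpHandle, done: mpsc::Sender<()>) {
    // run until the last handle is unref'd (simulated by counting down)
    while live_handles > 0 {
        println!("processing events; {} handle(s) still referenced", live_handles);
        live_handles -= 1;
    }
    drop(op_handle);        // clean up the op_handle...
    done.send(()).unwrap(); // ...then notify the Rust loop to exit
}

fn main() {
    let (tx, rx) = mpsc::channel();
    uv_run_sketch(2, OpHandle, tx);
    rx.recv().unwrap();
    println!("rust loop notified; exiting");
}
```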
What's missing
Handling the `void*` data in `uv_*_t` structs will be interesting, especially when managed primarily from Rust.

I hope this change passes muster. Thanks for reading!