timotimo | but still, starting, running and terminating the uv loop every time we want to read or write isn't terribly good | 00:00 | |
performance wise | |||
leont | It depends, really | 00:02 | |
timotimo | in this case it's multiple short lines of text | ||
and going via nativecall into puts gives much better performance than going through nqp::say | |||
and there's quite some overhead to NativeCall. | 00:03 | ||
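(A minimal sketch of the NativeCall-into-puts route mentioned above; the binding is illustrative rather than Rakudo's actual internal code, and assumes libc's puts can be resolved from the running process:)

    use NativeCall;

    # Bind C's puts(3); a Str argument is marshalled to a char*.
    sub puts(Str $line --> int32) is native { * }

    # Each call goes straight to libc, bypassing the libuv write path.
    puts("short line of output");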
leont | But what if say would have blocked? (e.g. on a pipe or socket) | 00:04 | |
timotimo | what about it? | ||
we literally start the event loop, add a single thing to it - in this case stdout - and register a callback that immediately removes the thing from the event loop again, then we wait for the event loop to terminate | 00:05 | ||
00:05 cognominal joined
timotimo | if it blocks, who cares, we block on the result anyway | 00:05 | |
00:53 tokuhiro_ joined
02:54 tokuhiro_ joined
03:13 tokuhiro_ joined
06:09 FROGGS joined
06:36 Ven joined
07:29 Ven joined
10:04 lizmat joined
10:28 zakharyas joined
11:14 Ven joined
12:02 FROGGS[tab] joined
12:47 zakharyas joined
13:22 Ven joined
14:12 FROGGS[tab]_ joined
14:23 Ven joined
14:52 Ven joined
15:16 tokuhiro_ joined
[Coke] | where is the mvm sub 'endprofile' implemented? | 15:17 | |
ah. it calls MVM_profile_end... | 15:18 | ||
and the question should have been "mvm op", not "mvm sub". my bad. | 15:20 | ||
15:26 lizmat joined
15:39 cognominal joined, Ven joined
15:59 Ven joined
16:18 tokuhiro_ joined
Ven | timotimo: %rip is the current instruction pointer, right? I don't understand why there's a word & long version of this register... for historical purposes? | 16:26 |
(asking timotimo++ because brrt++ isn't here :P) | |||
(erm, *next* instruction pointer) | |||
jnthn | .oO( It's called "rest in peace" 'cus that instruction will be retired soon... ) | 16:40 |
17:11 Ven joined
17:33 Peter_R joined
17:42 FROGGS joined
18:19 tokuhiro_ joined
18:54 leont joined
19:06 vendethiel joined
19:38 tokuhiro_ joined
20:41 vendethiel joined
20:46 colomon joined
20:58 mtj_ joined
leont | timotimo: hmm, that does sound suboptimal (re: say) | 21:12 | |
jnthn | The big problem we have with sync I/O things at the moment is that handles end up having thread affinity in libuv | 21:13 | |
For async I/O it's all async, so nothing is blocking on it, and it's fine that we throw all the work in a queue and dedicate a single thread to running the event loop | 21:14 |
But for sync I/O that model would create fairly intolerable overhead. | |||
I mean, if what we have now ain't tolerable, that ain't gonna be :) | |||
(The Moar async I/O event loop thread never actually runs any userland code at all, it just takes work from an input queue and shoves callbacks to run in the scheduler queue you give it with the task) | 21:16 | ||
In hindsight, shoving all of our sync I/O stuff also through libuv was probably not so wise. | 21:17 | ||
It seemed like a good idea to outsource handling platform-specific things | |
But given you can count the platforms we care to support in the near future on the fingers of one hand, and even then it's basically POSIX and Windows, that probably wasn't so big a win either. | 21:18 | ||
And the cost is that people pass sync handles around between threads and get really weird behavior. | |||
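(A rough Perl 6 analogue of the event-loop model described above, purely illustrative: the queue shape and names are assumptions, and MoarVM's real loop is C inside the VM, not userland code. The point is that one dedicated thread only drains an input queue and cues callbacks onto the scheduler, so user code never runs on the loop thread itself:)

    my Channel $input .= new;

    # Dedicated "event loop" thread: it takes work items but never runs them itself.
    my $loop-thread = Thread.start({
        for $input.list -> &callback {
            $*SCHEDULER.cue(&callback);   # the callback runs on the thread pool
        }
    });

    my $ran = Promise.new;
    $input.send({ say "callback ran on the pool, not the loop thread"; $ran.keep(True) });
    await $ran;

    $input.close;
    $loop-thread.finish;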
leont | I'm still observing async IO issues with Proc::Async :-/ | 21:19 | |
jnthn | Such as? | ||
leont | It works fine as long as I have only one of them | 21:20 | |
If I have two, things blow up | |||
jnthn | :/ | ||
leont | First is missing part of its data, second is hanging (and my signal handler isn't responding either) | ||
(for sigint) | |||
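(For reference, a minimal sketch of the shape being described: two Proc::Async instances with stdout/stderr taps, a SIGINT tap, and both exit promises awaited together. The commands and file names are made up for illustration:)

    my @procs = <one.txt two.txt>.map: { Proc::Async.new('cat', $_) };
    for @procs -> $proc {
        $proc.stdout.tap({ print $_ });
        $proc.stderr.tap({ note $_ });
    }

    signal(SIGINT).tap({ note "caught SIGINT"; exit 1 });

    # Reported symptom: one process loses part of its output, the other hangs here.
    await @procs.map(*.start);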
jnthn did have a Proc::Async thing running really nicely a month or two back | 21:23 | ||
Wonder what happened | |||
leont | I've never seen it work nice | ||
At least not with multiple processes | |||
jnthn | The thing I had ran a bunch of 'em. | ||
Heck, I used it to manage a load of concurrent scp processes about a year ago after I first added it, and it was pretty stable. | 21:24 | ||
jnthn should perhaps try things on more than one platform. :) | |||
leont | I don't know, maybe I'm doing something terribly wrong | ||
jnthn | Maybe, but it still sounds like something's wrong. | 21:25 | |
Is the thing you're trying to run anywhere I can try it out next week, once I'm back from teaching? | |||
bah, was that sentence even English... :) | |||
leont | Just pushed a testing branch to tap-harness | ||
jnthn | OK | 21:26 | |
Found it | |||
leont | It's triggered by running bin/prove6 (against any p6.t) | ||
jnthn | OK, will give it a try next week (or maybe at weekend) and see what I can reproduce. | 21:27 | |
leont | Thanks :-) | ||
jnthn | my $timer = $done.then({ now - $start-time }); | 21:28 | |
cute! | |||
leont | I'm planning to try to convert t/harness to it, and see what happens. Not having parallelization is suboptimal, but not a blocker at this stage I suppose. | 21:29 | |
jnthn | ah, but | ||
my $done = $process.start(); | |||
Are you using that promise to determine when the process ends? | 21:30 | ||
leont | Yes, the harness class is awaiting all those promises | ||
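(Pulled together, the pattern under discussion looks roughly like this; the command and file name are illustrative. The promise returned by .start is kept when the process exits, .then derives an elapsed-time promise from it, and the harness awaits the whole batch:)

    my $start-time = now;
    my $process    = Proc::Async.new('perl6', 't/some-test.t');
    $process.stdout.tap({ print $_ });

    my $done  = $process.start;
    my $timer = $done.then({ now - $start-time });

    await $done;                        # the harness awaits all of these
    say "test run took {$timer.result} seconds";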
jnthn | OK | 21:32 | |
I'll have a look into it | 21:33 | ||
But, teaching tomorrow, so I'd better rest now :) | |||
'night | |||
timotimo | gnite jnthn! | ||
leont | Good night! | ||
21:41 tokuhiro_ joined
23:15 zakharyas joined
23:42 tokuhiro_ joined