github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm Set by AlexDaniel on 12 June 2018. |
00:01
sena_kun joined
00:03
Altai-man_ left
timotimo | in that case, i wonder if compiling what deopt has to do to ops that can then be jitted rather than interpreted would be doable without making a big mess out of things | 00:33 | |
02:01
Altai-man_ joined
02:03
sena_kun left
04:02
sena_kun joined
04:03
Altai-man_ left
04:53
sena_kun left
04:54
sena_kun joined
06:01
Altai-man_ joined
06:04
sena_kun left
nwc10 | good *, #moarvm | 06:55 | |
06:57
Kaeipi left
07:00
Kaiepi joined
07:47
zakharyas joined
08:02
sena_kun joined
08:03
Altai-man_ left
10:01
Altai-man_ joined
10:03
sena_kun left
11:35
zakharyas left
11:44
patrickb joined
12:01
sena_kun joined
12:03
Altai-man_ left
12:42
zakharyas joined
12:45
Kaiepi left
12:48
Kaiepi joined
13:02
Kaeipi joined
13:06
Kaiepi left
13:13
Kaeipi left
jnthn | Ah....the revenge of lazy deopt | 13:46 | |
So at the point we find an exception handler...the frame we find it in may still be specialized | |||
But if there's lazy deopt, then by the time we actually reach it, then it won't be any more | 13:47 | ||
And ouch ouch ouch, it ain't just JIT | 13:49 | ||
It'll also have the specialized bytecode address wrong too | 13:50 | ||
Well, have the specialized address, not the deopt'd one | |||
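The lazy-deopt problem jnthn describes above can be sketched in miniature: a handler address captured while a frame is still specialized stops being meaningful once lazy deopt swaps the bytecode on return into the frame. This toy C model uses invented names (`Frame`, the `handler_*` offsets) and is not MoarVM's real code; the "resolve after return" function mirrors the shape of the frame-unwind fix pushed later in the log (15:43).

```c
#include <assert.h>
#include <stddef.h>

/* Toy model (names invented; not MoarVM's real structs). A frame may be
 * running specialized bytecode. Lazy deopt only swaps in the
 * unspecialized bytecode when we return into the frame, so any absolute
 * handler address captured before that point goes stale. */
typedef struct {
    const unsigned char *bytecode;       /* currently active buffer */
    const unsigned char *spesh_bytecode; /* specialized variant */
    const unsigned char *plain_bytecode; /* unspecialized variant */
    size_t handler_spesh_off;            /* handler offset, specialized */
    size_t handler_plain_off;            /* handler offset, plain */
    int deopt_pending;                   /* lazy deopt scheduled */
} Frame;

/* The buggy, eager approach: capture the handler address while the
 * frame is still specialized. */
const unsigned char *handler_captured_eagerly(const Frame *f) {
    return f->bytecode + f->handler_spesh_off;
}

/* Lazy deopt runs when control returns into the frame. */
void return_into_frame(Frame *f) {
    if (f->deopt_pending) {
        f->bytecode = f->plain_bytecode;
        f->deopt_pending = 0;
    }
}

/* Roughly the fixed approach: record only the intent to unwind to the
 * handler, run the usual return processing (which performs lazy deopt),
 * then resolve against whichever bytecode is current afterwards. */
const unsigned char *handler_resolved_after_return(Frame *f) {
    return_into_frame(f);
    if (f->bytecode == f->plain_bytecode)
        return f->bytecode + f->handler_plain_off;
    return f->bytecode + f->handler_spesh_off;
}
```

With `deopt_pending` set, the eagerly captured address points into the specialized buffer, while resolving after the return correctly lands in the plain bytecode.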
13:58
lucasb joined
14:01
Altai-man_ joined
14:03
sena_kun left
nine | We serialize method caches? That's a bit surprising... | 14:49 | |
jnthn | Yes, though they're going away with new-disp | 14:54 | |
14:55
Kaiepi joined
jnthn | *sigh* I really don't get why the clever fix I thought of for the unwind issue causes all kinds of madness... | 14:55 | |
nine | the usual small detail throwing a wrench... | 14:58 | |
15:27
Kaiepi left
Geth | MoarVM/new-disp: 4579f66547 | (Jonathan Worthington)++ | src/core/frame.c
Have frame unwind use the return mechanism

Since we perform lazy deopt on returning into a frame, we can't rely on the JIT or bytecode address we discovered when locating an exception handler still being meaningful after that has happened. So instead, set the return address in the caller to be those things, and then rely on the usual frame removal process to do the right thing, including in the lazy deopt case. |
15:43 | |
jnthn | Well, that's one SEGV down | ||
grmbl, I still have failures when running `make test` that I can't reproduce when I run the things alone | 15:44 | ||
[Coke] | does the error in make test go away with TEST_JOBS=1 ? | 15:46 | |
jnthn | Didn't try that yet | ||
MasterDuke | jnthn: fwiw, i get those on master. i.e., random fails in spectests, that i can't repro even when running that file in a loop for a lot of iterations. and it seems completely random which test file | 15:48 | |
jnthn | This is just test, not spectest | 15:49 | |
15:50
Kaiepi joined
MasterDuke | ah. don't remember it happening in test on master | 15:50 | |
jnthn | Yeah, this is certainly a regression of sorts | 15:51 | |
Despite the mass of spectest fails (most 'cus I didn't wire up multi dispatch to the new dispatcher yet, it seems), I only see two segfaults | 15:54 | ||
The current one is...because we have no caller...wat | |||
nwc10 | ASAN makes no comment on the NQP test suite | 15:55 | |
jnthn | I observe that to be stable also | 15:56 | |
Is it happy with the Rakudo build? | |||
Hm, this one is a real curiosity | 15:58 | ||
16:01
sena_kun joined
nwc10 | jnthn: it's working on that | 16:03 | |
16:03
Altai-man_ left
16:34
Kaiepi left
nwc10 | jnthn: build is OK, many rakudo tests fail | 16:36 |
16:36
zakharyas1 joined
16:38
zakharyas left
nwc10 | tried one: Use of Nil in string context | 16:40 | |
jnthn | How many is many? | 16:43 | |
nwc10 | 54 i think | 16:44 | |
jnthn | Oh, tests, not test files | 16:45 | |
nwc10 | no, 54 test files | ||
out of 108 | |||
er, at least | |||
jnthn | OK, I see a single digit number | 16:46 | |
nwc10 | I realise now that tmux can get keen and skip output | ||
I have spesh | |||
and all the pain enabled in the environment | |||
jnthn | Me too, though only MVM_SPESH_BLOCKING=1 | ||
nwc10 | oh wait, am I supposed to update NQP? | ||
jnthn | If you don't have c0d4f2f289 then you'll probably have a bad time | 16:47 | |
Or at least, a less good one | 16:48 | ||
nwc10 | no, I think I had that | ||
jnthn | *sigh* this is so odd, it's somehow ending up in the same callframe after a return, rather than going back a frame... | 16:59 | |
Well, tomorrow | |||
17:19
MasterDuke left
17:54
zakharyas1 left
18:01
Altai-man_ joined
18:03
sena_kun left
19:45
zakharyas joined
20:01
sena_kun joined
20:03
Altai-man_ left
timotimo | tbh sounds like maybe rr + watchpoints could give valuable insight here | 20:16 | |
20:27
zakharyas left
nwc10 | jnthn: in src/gc/roots.c, shouldn't the code that iterates over tc->instance->sc_weakhash hold the tc->instance->mutex_sc_registry while doing this? | 20:28 |
timotimo | perhaps because it's GC, and all threads are supposed to be In It Together™, you don't need to? | ||
nwc10 | yes, that might be the answer | 20:29 | |
timotimo | if only one thread is responsible for marking the weakhash, that ought to be fine | ||
i do believe our marking and sweeping is in consecutive stages, i.e. every thread syncs up before all of them go to the second phase? | |||
actually i'm basing this on ... nothing whatsoever | |||
also, i wonder if it'd be interesting to store what percentage of time is spent in mark vs sweep in the profiler | 20:30 | ||
nwc10 | I don't know enough. Nothing you have said contradicts my understanding | ||
timotimo | that's a good way to say that :D :D | ||
nwc10 | but I was thinking something like your " i do believe our marking and sweeping " ... | ||
and thinking - but hey, can't one thread be doing the marking, including that hash, whilst some other thread decides to mutate it. | |||
timotimo | btw i just had a thought | 20:33 | |
then i forgot it again | |||
but i'm at least slightly certain it was a good thought | |||
jnthn | nwc10: I don't think so 'cus the GC is "stop the world"? | 20:36 | |
nwc10 | OK. I didn't know what got stopped when. | ||
I hope that I didn't wake you up :-) | 20:37 | ||
jnthn | No, have been cooking/eating :) | 20:39 | |
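jnthn's stop-the-world answer can be sketched as a tiny C model: if every mutator thread is parked during collection, the thread marking tc->instance->sc_weakhash cannot race with an insert, so holding mutex_sc_registry during marking buys nothing. All names below are invented for illustration; this is not MoarVM's API.

```c
#include <assert.h>

/* Toy stop-the-world model (invented names). While the world is
 * stopped for GC, no mutator thread runs, so the single thread that
 * marks the weakhash-like structure needs no registry mutex. */
typedef struct {
    int world_stopped;  /* all mutator threads parked for GC */
    int weakhash_size;  /* stand-in for the weakhash contents */
} VM;

/* A mutator trying to add an entry; it simply cannot run while the
 * world is stopped. */
int mutator_insert(VM *vm) {
    if (vm->world_stopped)
        return 0;
    vm->weakhash_size++;
    return 1;
}

/* GC marking: only happens during stop-the-world, where reading the
 * hash without a lock is safe because no mutation is possible. */
int gc_mark_weakhash(const VM *vm) {
    return vm->world_stopped ? vm->weakhash_size : -1;
}
```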
timotimo | hey jnthn are all commits needed to see the "return to the same callframe" issue already pushed? if so, which test file is it? | ||
jnthn | Yes, they are, but unfortunately I forget the test file :/ | 20:41 | |
(I'm on my home machine now, it's on my office machine) | |||
But basically run make spectest with MVM_SPESH_BLOCKING=1 and I only saw 2 SEGVs and it's one of them | |||
timotimo | does the crash look spectacular enough that i may be able to tell relatively quickly? | ||
ah, good | |||
jnthn | It's one about recursion and native arrays, iirc | 20:42 | |
An integration test | |||
timotimo | jnthn: were you ever interested in timing mark vs sweep on a per-thread basis for the profiler? | 20:44 | |
jnthn | I don't think so... :) | ||
timotimo | though i think if you have telemeh active you can see that | 20:45 | |
i totally need to write a telemeh-to-log-timeline translator so i actually have something to look at the output with | |||
jnthn | I ain't really done much with the GC in a long time, because I'm not often seeing it be The Bottleneck | ||
timotimo | or someone could put tracy into moarvm and replace telemeh with it | ||
jnthn | (I'm sure there are cases where it can be) | ||
timotimo | back when marking had to walk the entirety of the profile data tree, it could take rather A Time | 20:46 | |
i assume to get there i'd turn off spesh during core setting compilation? | 20:49 | ||
and maybe all of rakudo compilation actually | |||
just a little regret from not really having worked on the dispatcher program to spesh bytecode writer | 20:53 | ||
Stage parse : 202.543 | 20:56 | ||
woof! | |||
lizmat | meow! | 20:59 | |
timotimo | "t/spec/integration/deep-recursion-initing-native-array.t" that sounds good | 21:19 | |
slightly amused that this code results in two different specializations of init-array; one where the first argument is a scalar, the other where the second argument is a scalar | 21:39 | ||
because on each branch it has the - 1 on a different argument | |||
ah, bytecodedump is very unhappy with sp_dispatch_* having .s instead of .d | 21:56 | ||
22:01
Altai-man_ joined
22:03
sena_kun left
22:12
nebuchadnezzar left
Geth | MoarVM/new-disp: cc8634af49 | (Timo Paulssen)++ | src/core/bytecodedump.c
teach bytecodedump about sp_dispatch_* ops

will really want to factor this out ... |
22:13 | |
timotimo | now is cooking and eating time, but having this fixed will surely help someone at some point | ||
(also throws out debug output from bytecode dump) | |||
this is perhaps not important, but somehow all comments that optimize_disp writes to the spesh graph have order 0, even though it's supposed to be using a counter on the spesh graph that gets incremented every time | 22:19 | ||
i'm finding myself just looking at disp stuff in general rather than hunting the segfault | 22:24 | ||
i'm not sure if this is relevant, but i see the sp_dispatch_o of infix:<+> have an annotation "logged: bytecode offset 182" but in the facts i see that at offset 184 it has "505 x spesh plugin guard index 0" | 22:30 | ||
("spesh" just because it's re-using the same kind of entry for spesh guard hits and dispatch guard outcomes) | |||
22:48
lucasb left
timotimo | # [002] Deemed polymorphic | 23:02 | |
sp_dispatch_o r8(2), lits(raku-rv-typecheck), callsite(0x12b1240, 2 arg, 2 pos, nonflattening, interned), sslot(3), litui32(1), r6(3), r5(9) | |||
when removing 2 from the bytecode address in interp.c; which makes sense, since reading the opcode advances the cur_op by 2 already, right? | 23:05 | ||
jnthn | Yes | 23:06 | |
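The off-by-two timotimo describes can be shown with a minimal fetch step, assuming 16-bit opcodes as in MoarVM's interpreter: reading the opcode itself advances `cur_op` past it, so an offset observed after the fetch is 2 bytes beyond the op's own address, and logging code must subtract 2 to name the op. The function name here is invented for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Minimal fetch step in the style of a bytecode interpreter with
 * 16-bit opcodes. After the fetch, cur_op already points past the
 * opcode, so the observed offset is the op's offset plus 2. */
size_t offset_after_fetch(const uint8_t *bytecode, const uint8_t **cur_op) {
    uint16_t op = (uint16_t)((*cur_op)[0] | ((*cur_op)[1] << 8));
    (void)op;           /* a real loop would dispatch on this */
    *cur_op += 2;       /* fetch advances past the opcode */
    return (size_t)(*cur_op - bytecode);
}
```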
timotimo | want me to push that? | 23:07 | |
of course, apart from outputting whether it has never been dispatched or is deemed poly- or monomorphic it's not doing anything so far | 23:11 | ||
23:35
nebuchadnezzar joined
23:47
patrickb left