github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
MasterDuke yeah, added a print if it's passed in, only happens once and then the segv 00:00
in the very first file of the nqp build 00:01
jnthn Can you breakpoint where we emit the sp_invoke_* in optimize.c and see if the spesh cand looks legit when we poke it into the slot? 00:02
This all looks annoyingly simple/correct :) 00:03
Which means when we figure it out I'll feel silly
MasterDuke heh, yep
(gdb) p *cands_and_arg_guards->spesh_candidates[spesh_cand] 00:04
$1 = {common = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 3, flags1 = 0 '\000', flags2 = 32 ' ', size = 216}, st = 0x55555558fc40}, body = {cs = 0x7ffff7deb6e0 <inv_arg_callsite>,
    type_tuple = 0x7ffff00063b0, discarded = 0 '\000', bytecode_size = 2180, bytecode = 0x7ffff00e93e0 <incomplete sequence \367>, handlers = 0x7ffff003c5f0, spesh_slots = 0x7ffff00e2f60, num_spesh_slots = 38, num_deopts = 96,
    deopts = 0x7ffff0051d00, deopt_count = 0, deopt_named_used_bit_field = 0, deopt_pea = {materialize_info = 0x0, materialize_info_num = 0, materialize_info_alloc = 0, deopt_point = 0x0, deopt_point_num = 0, deopt_point_alloc = 0},
    num_inlines = 3, inlines = 0x7ffff00e0d60, local_types = 0x7ffff00e0e10, lexical_types = 0x7ffff0002230, num_locals = 70, num_lexicals = 0, work_size = 640, env_size = 0, num_handlers = 6, jitcode = 0x0,
    deopt_usage_info = 0x7ffff00ea9e0}}
that looks better 00:05
btw, the segv is here github.com/MoarVM/MoarVM/blob/mast...ame.c#L300
huh. the spesh_cand_slot was 12 00:15
i then put a breakpoint in interp.c's sp_fastinvoke_o 00:16
right before the MVM_frame_invoke call
and continued 00:17
but GET_UI16(cur_op, 4) was 18
nm, that was after cur_op += 6; 00:19
GET_UI16(cur_op, 4) when getting the spesh slot was in fact 12 00:20
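The 18-vs-12 confusion above is just the instruction pointer having advanced. A minimal standalone sketch of the effect, where get_ui16 is a stand-in for MoarVM's GET_UI16 macro and the bytecode layout is made up:

    /* reading a 16-bit operand at a fixed offset gives a different value
     * once cur_op has been bumped past the current instruction */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t get_ui16(const uint8_t *cur_op, size_t offset) {
        uint16_t v;
        memcpy(&v, cur_op + offset, sizeof v);   /* unaligned-safe read */
        return v;
    }

    int main(void) {
        uint8_t stream[16] = { 0 };              /* pretend bytecode */
        uint16_t slot = 12;
        memcpy(stream + 4, &slot, sizeof slot);  /* spesh slot operand at offset 4 */

        const uint8_t *cur_op = stream;
        printf("at the op:         %u\n", get_ui16(cur_op, 4));  /* 12 */

        cur_op += 6;                             /* interpreter stepped past the op */
        printf("after cur_op += 6: %u\n", get_ui16(cur_op, 4));  /* bytes of the next op */
        return 0;
    }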
jnthn Hm, odd 00:31
I should sleep, 'night o/
MasterDuke same here
nwc10 good *, #moarvmable6 07:06
jnthn so bot 10:18
morning o/
nwc10 such morning 10:45
jnthn still has a grumpy arm :/ 10:46
patrickb hopes this is about some computer not behaving well, but fears it's about the bodypart. 10:53
nwc10 so bot | such morning | greet!
jnthn The bodypart, sadly 10:54
patrickb nwc: o/ there you go :-)
jnthn Though I already have a work excuse to order an M1 :P 10:55
Was expecting to receive my new Ryzen box this week. Still haven't.
MasterDuke inspiration did not strike overnight. but hopefully a fresh look this afternoon will reveal something 12:10
so the candidate is fine when put into the spesh slot. but it's garbage as soon as it's taken out 12:12
nine which would indicate that the spesh slot does not get marked 12:22
MasterDuke i don't see anything that looks like other slots getting marked in optimize.c 12:26
nine So, how is this supposed to work? How does stuff in spesh slots get marked? We surely put MVMCollectables in there, don't we? 12:28
MasterDuke yeah, the signature is `MVM_spesh_add_spesh_slot(MVMThreadContext *tc, MVMSpeshGraph *g, MVMCollectable *c)`
nine Ah gc_mark, in MVMSpeshCandidate.c 12:33
MasterDuke those are different spesh slots, right? 12:43
github.com/MoarVM/MoarVM/blob/mast....h#L22-L23 vs github.com/MoarVM/MoarVM/blob/mast....h#L47-L48 12:46
oh github.com/MoarVM/MoarVM/blob/mast...ph.c#L1355 12:56
jnthn Spesh slots are always collectables 13:30
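To make the marking question concrete: a toy version of what marking the spesh slots amounts to. This is an editor's sketch with stand-in types, not the actual gc_mark in MVMSpeshCandidate.c; the point is only that every slot holds a collectable whose address has to reach the GC worklist, otherwise whatever a frame later reads out of effective_spesh_slots may have been collected or moved.

    #include <stdio.h>

    typedef struct { const char *debug_name; } Collectable;

    typedef struct {
        Collectable **spesh_slots;      /* collectables used by specialized code */
        unsigned int  num_spesh_slots;
    } CandidateBody;

    /* stand-in for MVM_gc_worklist_add(tc, worklist, &slot) */
    static void worklist_add(Collectable **slot_addr) {
        printf("marking slot holding %s\n", (*slot_addr)->debug_name);
    }

    static void mark_spesh_slots(CandidateBody *body) {
        unsigned int i;
        for (i = 0; i < body->num_spesh_slots; i++)
            worklist_add(&body->spesh_slots[i]);
    }

    int main(void) {
        Collectable cs = { "callsite" }, cand = { "spesh candidate" };
        Collectable *slots[] = { &cs, &cand };
        CandidateBody body = { slots, 2 };
        mark_spesh_slots(&body);
        return 0;
    }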
OK, back to the nested resumption... 13:33
MasterDuke hm, i recorded the segv in rr, then put a watch on g->spesh_slots[12] (i confirmed 12 was the right value), but nothing else writes there before the segv... 14:09
tried a read watch, so far hits at github.com/MoarVM/MoarVM/blob/mast...ph.c#L1355 14:11
and then the segv 14:12
jnthn is still fixing compile errors :P 14:24
Only one left
Hurrah, now I get to hunt a segfault too :) 14:30
MasterDuke huh. i also went to interp.c right before the frame_invoke and put a watch and rwatch on tc->cur_frame->effective_spesh_slots[12] 14:31
and then reverse-continued
nwc10 jnthn: start valgrind and put the kettle on?
jnthn Nah, it was a silly one, already fixed :) 14:32
Now I've got a legit panic telling me I forgot to implement something
MasterDuke and if i'm following what happens correctly (not 100% guaranteed), the only access is github.com/MoarVM/MoarVM/blob/mast...ize.c#L116 (the realloc in MVM_spesh_add_spesh_slot) 14:33
so i believe the spesh cand is stuck in the slot, then there's an unrelated MVM_spesh_candidate_add that triggers a bunch of other spesh_optimize/spesh_inline calls. those in turn call MVM_spesh_add_spesh_slot, causing a realloc 14:36
and the candidate is lost
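That hypothesis is the classic pointer-into-a-growable-array bug. A standalone sketch of that shape, with made-up types; add_slot only stands in for the growth inside MVM_spesh_add_spesh_slot, and whether anything in MoarVM really keeps a stale alias like this is exactly what is being debugged:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        void  **slots;
        size_t  num;
        size_t  alloc;
    } Graph;

    /* append a slot, growing the array; returns the slot index */
    static size_t add_slot(Graph *g, void *c) {
        if (g->num == g->alloc) {
            g->alloc = g->alloc ? g->alloc * 2 : 8;
            g->slots = realloc(g->slots, g->alloc * sizeof *g->slots);
        }
        g->slots[g->num] = c;
        return g->num++;
    }

    int main(void) {
        Graph g = { 0 };
        int candidate = 42;

        size_t idx = add_slot(&g, &candidate);
        void **held = g.slots;            /* e.g. something caching the array */

        /* unrelated specializations keep adding slots... */
        for (int i = 0; i < 1000; i++)
            add_slot(&g, NULL);

        printf("slot index still %zu, but the array moved: %p -> %p\n",
               idx, (void *)held, (void *)g.slots);
        /* dereferencing held[idx] now would be use-after-free, even though
         * nothing ever wrote to that slot */
        free(g.slots);
        return 0;
    }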
MasterDuke well, i think i confused myself, but now i'm trying something else 15:29
aiui, tc->cur_frame->effective_spesh_slots is assigned spesh_cand->body.spesh_slots. so we stick the spesh candidate into a spesh graph's slots, those are copied to a candidate, then they're assigned to the frame. eventually that frame is invoked and its spesh slots become the tc->cur_frame->effective_spesh_slots 15:34
so what i did is set a breakpoint when we stick the spesh candidate into a slot in optimize.c. then i set a breakpoint on MVM_frame_invoke and then stepped into it until the spesh_cand was assigned (if ever) because that's where the effective_spesh_slots were gotten from. then i checked the number of slots, and if there were enough, looked at the 15:37
12th (cause that's the slot we stuck the candidate into back in optimize.c)
and there was never a spesh_cand that both had enough slots and had the previously-stuck candidate sitting in its 12th slot 15:39
and then we hit the segv
i still don't know why this is happening, but maybe the above triggers something for someone else 15:40
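For reference, a compressed picture of the handoff described above, with illustrative types only: the graph's slots array ends up on the candidate, and frame invocation just points effective_spesh_slots at it, so the frame, the candidate, and anything still holding the graph's array all share one piece of storage.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { void **spesh_slots; size_t num_spesh_slots; } Candidate;
    typedef struct { void **slots; size_t num; } Graph;
    typedef struct { void **effective_spesh_slots; } Frame;

    /* graph -> candidate: the candidate takes over the finished slots array */
    static void finish_candidate(Candidate *cand, Graph *g) {
        cand->spesh_slots     = g->slots;
        cand->num_spesh_slots = g->num;
    }

    /* candidate -> frame: MVM_frame_invoke-style setup just aliases the array */
    static void invoke(Frame *f, Candidate *cand) {
        f->effective_spesh_slots = cand->spesh_slots;
    }

    int main(void) {
        void *the_candidate = (void *)0xC0FFEE;      /* stand-in collectable */
        Graph g = { calloc(16, sizeof(void *)), 16 };
        g.slots[12] = the_candidate;                 /* optimize.c stores it in slot 12 */

        Candidate cand; Frame frame;
        finish_candidate(&cand, &g);
        invoke(&frame, &cand);

        /* frame and candidate share storage: if the GC neither marks nor
         * updates these slots, the entry at index 12 can go stale */
        printf("frame sees %p in slot 12\n", frame.effective_spesh_slots[12]);
        free(g.slots);
        return 0;
    }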
jnthn Hurrah, first dispatch resumption tests where we fall back on an outer level of resumption work :) 16:03
moritz congrats 16:04
"outer level of resumption" sounds like the precursor to outer hell, or so :-)
nine MasterDuke: you haven't pushed the new commits yet, have you? 16:09
jnthn :-D
The Rakudo commit to use all of that work is pleasingly small. github.com/rakudo/rakudo/commit/8d...f37dd3d532 16:18
nine .oO(When I grow up, I want to be a real commit) 16:19
nwc10 indeed it is. One question, what is 2?
MasterDuke nine: just did 16:20
jnthn A value that wants to grow up to be a named constant, in this case meaning "we're doing a lastcall"
nwc10 "when I grow up I want to have a name"?
jnthn Something like that :)
nine MasterDuke: that's much more fun to run in rr than yesterday's GC_DEBUG=3 version :) 16:22
MasterDuke heh, tell me about it
though i've been noticing something odd. the backtraces in gdb and rr take a noticeable time to appear/print 16:23
i.e., i type bt, first frame or two prints, then ~two seconds later the rest of the backtrace appears
nine not here 16:25
MasterDuke: what I see in rr is that what we get out of MVMSpeshCandidate *spesh_cand = (MVMSpeshCandidate *)tc->cur_frame->effective_spesh_slots[GET_UI16(cur_op, 4)]; is not a spesh candidate at all, but an NQPMatch object 16:30
MasterDuke huh. how did you figure that out? 16:31
oh, btw, i haven't implemented "invalidating the caller"
nine Break in src/core/interp.c:6000, reverse-cont to that from the segfault, then p *spesh_cand->common.st 16:32
assuming that you also get a segfault in allocate_frame
MasterDuke colabti.org/irclogger/irclogger_lo...02-10#l115 16:33
yep
oh, nice find 16:34
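Spelled out, the rr recipe nine used looks roughly like this; the rr/gdb commands are standard, the program invocation is a placeholder for whatever reproduces the crash, and interp.c:6000 is the slot fetch mentioned above:

    $ rr record <failing nqp build step>   # placeholder invocation
    $ rr replay
    (rr) continue                          # run forward until the SIGSEGV
    (rr) break src/core/interp.c:6000      # where spesh_cand is read from the slot
    (rr) reverse-continue                  # back to the last time that line ran
    (rr) p *spesh_cand->common.st          # the STable shows what the object really is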
MasterDuke re not having implemented invalidating the caller, that shouldn't be the problem right now, since no candidates have been removed before the segv 17:03