github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
nwc10 good *, #moarvm 05:13
MasterDuke hey hey hey. is github.com/MoarVM/MoarVM/pull/1344...1466-L1481 correct? are those comparisons still valid if the gc has moved the candidate that `g->cand` points to? 09:14
timotimo ah, interesting 11:45
i think since we turned cand into a collectable, the graph would never own the memory any more 11:47
hm 11:48
the point is that there may not be a candidate at all
in which case a destroyed spesh graph needs to clear its memory 11:49
MasterDuke hm. what i wasn't sure about was e.g., `g->lexical_types = cand->body.lexical_types;` in MVM_spesh_graph_create_from_cand and then `g->cand->body.lexical_types != g->lexical_types` in MVM_spesh_graph_destroy 11:54
timotimo we only have a few spots where gc is allowed to run in spesh 12:03
perhaps we just need a little refresh function that we call in those places
timotimo i'm not sure if we can just always create the MVMSpeshCandidate object; we probably have to create them in the middle of speshing when we want to inline something, and that's currently not possible 12:39
timotimo i.e. if we would allocate in the middle of speshing, GC would blow up in our faces 12:39
MasterDuke where's it being created? 12:41
MasterDuke is that problematic? if it were currently blowing up because of that, there would be a different error, right? 12:42
should `g->cand = cand` be an MVM_ASSIGN_REF? 12:43
jnthn Doubt it, because the graph is not a collectable 12:44
tellable6 2020-09-17T16:49:51Z #raku <[Coke]> jnthn: can you make me an admin on rakudo/rakudo, if you're OK with that?
MasterDuke oh, right
jnthn Finally, I have time to work on MoarVM stuff :) 13:31
MasterDuke: I'm a bit out of sync on the spesh candidate refactor; is there something I can help debug at this point? 13:32
timotimo woohoo
MasterDuke jnthn: cool. well, nqp builds, but rakudo panics just about right away compiling core.c 13:33
colabti.org/irclogger/irclogger_lo...-09-13#l55 13:34
branch has been rebased to master, just pushed it 13:35
Geth MoarVM: 50d3311c75 | (Jonathan Worthington)++ | 8 files
Better specialize boolification of boxed Num

Previously this wasn't specialized at all, and so fell back on the late-bound `istrue` instruction. Now it optimizes into an unbox and a truth test on the unboxed float, which in turn can be JIT-compiled into a relatively short sequence of instructions.
13:39
MoarVM: 21992e6f0b | (Jonathan Worthington)++ (committed using GitHub Web editor) | 8 files
Merge pull request #1346 from MoarVM/optimize-boxed-float-boolification

Better specialize boolification of boxed Num
jnthn Goodness, I'm behind on PR review. 13:40
ah, should do some other task 13:49
afk for a bit, back soon
jnthn back 14:49
Stage start : 0.000 14:51
MoarVM panic: Internal error: zeroed target thread ID in work pass
Yup, repro'd :)
MasterDuke cool 14:57
jnthn Didn't find the issue so far, alas... 15:14
But hm, this is a bit fishy 15:17
MasterDuke oh?
jnthn (gdb) p *(MVMSpeshCandidate *)0x7fffec5a9bc0
$4 = {common = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 3, flags1 = 0 '\000', flags2 = 42 '*', size = 208},
42 = 32 + 8 + 2
Oh, maybe it's OK
Yeah, I guess we don't clear MVM_CF_REF_FROM_GEN2 upon gen2 promotion 'cus we don't really need to 15:18
Geth MoarVM/oops-for-FSA-of-0: ed19b20dc3 | (Nicholas Clark)++ | src/core/fixedsizealloc.c
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.

Previously this wasn't trapped. With FSA_SIZE_DEBUG enabled (not the default) everything works (it can allocate "0" bytes, as it adds its debugging record keeping to the size requested). With default settings, we corrupt the malloc heap, which doesn't get reported at the time, and may or may not get reported in any useful way later.
15:59
MoarVM: nwc10++ created pull request #1349:
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.
16:00
nwc10 er, pun seems unavoidable - oops, I didn't rebase that first. 16:02
no release is tagged yet, is it? 16:03
timotimo not to my knowledge 16:04
jnthn So, at the point that we produce the candidate, the spesh slots seem OK. But if I put a loop into invoke over the spesh slots of the selected candidate and check none are in fromspace, a bunch of them turn out to be 16:12
However, the chosen candidate itself is not in fromspace 16:14
MasterDuke some are terrestrial, some aren't? doesn't sound like a great situation 16:24
jnthn Well, it means that somehow, somewhere, the spesh slots are getting messed up 16:30
Or rather, not marked 16:31
But how/why/where is quite a mystery
It's also odd just how persistent/stable the problem is over all kinds of different changes I attempt 16:42
MasterDuke guess i don't feel so bad i haven't found it yet myself 16:47
jnthn Yeah, I think I'll have to give up on it for today. Odd. :/ 16:55
MasterDuke thanks for looking. any suggestions for where i should experiment around? 16:56
jnthn I'm really out of ideas (if I had another one I'd already be trying it...) 17:04
jnthn Only to try and trace if the inter-gen set from the spesh thread is being mishandled somehow but I can't see how that could happen 17:04
I mean, it's as if the spesh slots just don't get GC marked in some GC run 17:05
But how exactly is quite the mystery
MasterDuke yeah, harder to debug the absence of something 17:10
Geth MoarVM/oops-for-FSA-of-0: 29052d4dfa | (Nicholas Clark)++ | src/core/fixedsizealloc.c
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.

Previously this wasn't trapped. With FSA_SIZE_DEBUG enabled (not the default) everything works (it can allocate "0" bytes, as it adds its debugging record keeping to the size requested). With default settings, we corrupt the malloc heap, which doesn't get reported at the time, and may or may not get reported in any useful way later.
17:18
nwc10 rebase!
nine jnthn: FWIW I've looked through the code systematically and have spent considerable time in rr to trace things and still haven't found the real reason. And yes, I'm baffled as well why all the clear fixes didn't change the behaviour a bit 17:45
Geth MoarVM/hash-single-allocation: 16 commits pushed by (Nicholas Clark)++
review: github.com/MoarVM/MoarVM/compare/3...8c07baf825
18:24
MoarVM: nwc10++ created pull request #1350:
Hash allocation as a single memory block
18:29
brrt nwc10: I'll have a look at it 18:32
nwc10 you might need peril sensitive sunglasses 18:33
lizmat
.oO( but but but brrt only has one head ? )
18:41
brrt usually 18:42
nwc10 does build on sparc64 so can't be *that* crack fuelled. 19:25
I can't spell
lizmat Yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2020/09/21/2020-...l-results/ 20:17
lizmat github.com/MoarVM/MoarVM/issues/1351 21:08