github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
00:14 klapperl joined 01:03 nwc10 left 04:34 nativecallable6 left, sourceable6 left, committable6 left, greppable6 left, bisectable6 left, tellable6 left, releasable6 left, statisfiable6 left, reportable6 left, coverable6 left, linkable6 left, unicodable6 left, squashable6 left, bloatable6 left, notable6 left, quotable6 left, evalable6 left, shareable6 left, benchable6 left, linkable6 joined, shareable6 joined, committable6 joined, reportable6 joined, tellable6 joined 04:35 releasable6 joined, quotable6 joined, nativecallable6 joined 04:36 benchable6 joined, statisfiable6 joined, squashable6 joined, unicodable6 joined, sourceable6 joined, bloatable6 joined 04:37 evalable6 joined, greppable6 joined, notable6 joined, bisectable6 joined, coverable6 joined 05:12 nwc10 joined
nwc10 good *, #moarvm 05:13
06:37 Altai-man joined 07:03 sena_kun joined 07:05 Altai-man left 07:27 zakharyas joined 07:33 zakharyas1 joined 07:36 zakharyas left 07:42 zakharyas1 left 07:46 zakharyas joined 08:06 domidumont joined 08:09 zakharyas1 joined 08:11 Kaiepi joined 08:12 zakharyas left 08:38 brrt joined
MasterDuke hey hey hey. is github.com/MoarVM/MoarVM/pull/1344...1466-L1481 correct? are those comparisons still valid if the gc has moved the candidate that `g->cand` points to? 09:14
10:00 leont joined 10:23 brrt left 11:02 Altai-man joined 11:05 sena_kun left 11:27 zakharyas1 left
timotimo ah, interesting 11:45
i think since we turned cand into a collectable, the graph would never own the memory any more 11:47
hm 11:48
the point is that there may not be a candidate at all
in which case a destroyed spesh graph needs to clear its memory 11:49
MasterDuke hm. what i wasn't sure about was e.g., `g->lexical_types = cand->body.lexical_types;` in MVM_spesh_graph_create_from_cand and then `g->cand->body.lexical_types != g->lexical_types` in MVM_spesh_graph_destroy 11:54
timotimo we only have a few spots where gc is allowed to run in spesh 12:03
perhaps we just need a little refresh function that we call in those places
12:31 zakharyas joined
timotimo i'm not sure if we can just always create the MVMSpeshCandidate object; we probably have to create them in the middle of speshing when we want to inline something, and that's currently not possible 12:39
12:39 zakharyas1 joined
timotimo i.e. if we would allocate in the middle of speshing, GC would blow up in our faces 12:39
MasterDuke where's it being created? 12:41
12:42 zakharyas left
MasterDuke that's problematic? if it was currently blowing up because of that there would be a different error, right? 12:42
should `g->cand = cand` be an MVM_ASSIGN_REF? 12:43
jnthn Doubt it, because the graph is not a collectable 12:44
tellable6 2020-09-17T16:49:51Z #raku <[Coke]> jnthn: can you make me an admin on rakudo/rakudo, if you're OK with that?
MasterDuke oh, right
12:44 brrt joined 12:48 zakharyas joined 12:49 brrt` joined 12:50 zakharyas1 left 12:51 brrt left 12:52 Kaiepi left
jnthn Finally, I have time to work on MoarVM stuff :) 13:31
MasterDuke: I'm a bit out of sync on the spesh candidate refactor; is there something I can help debug at this point? 13:32
timotimo woohoo
MasterDuke jnthn: cool. well, nqp builds, but rakudo panics just about right away compiling core.c 13:33
colabti.org/irclogger/irclogger_lo...-09-13#l55 13:34
branch has been rebased to master, just pushed it 13:35
Geth MoarVM: 50d3311c75 | (Jonathan Worthington)++ | 8 files
Better specialize boolification of boxed Num

Previously this wasn't specialized at all, and so fell back on the late-bound `istrue` instruction. Now it optimizes into an unbox and a truth test on the unboxed float, which in turn can be JIT-compiled into a relatively short sequence of instructions.
13:39
MoarVM: 21992e6f0b | (Jonathan Worthington)++ (committed using GitHub Web editor) | 8 files
Merge pull request #1346 from MoarVM/optimize-boxed-float-boolification

Better specialize boolification of boxed Num
jnthn Goodness, I'm behind on PR review. 13:40
ah, should do some other task 13:49
afk for a bit, back soon
14:29 vrurg_ is now known as vrurg 14:32 ggoebel joined 14:38 Kaiepi joined
jnthn back 14:49
Stage start : 0.000 14:51
MoarVM panic: Internal error: zeroed target thread ID in work pass
Yup, repro'd :)
MasterDuke cool 14:57
15:00 brrt` left 15:03 sena_kun joined 15:05 Altai-man left
jnthn Didn't find the issue so far, alas... 15:14
But hm, this is a bit fishy 15:17
MasterDuke oh?
jnthn (gdb) p *(MVMSpeshCandidate *)0x7fffec5a9bc0
$4 = {common = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 3, flags1 = 0 '\000', flags2 = 42 '*', size = 208},
42 = 32 + 8 + 2
Oh, maybe it's OK
Yeah, I guess we don't clear MVM_CF_REF_FROM_GEN2 upon gen2 promotion 'cus we don't really need to 15:18
Geth MoarVM/oops-for-FSA-of-0: ed19b20dc3 | (Nicholas Clark)++ | src/core/fixedsizealloc.c
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.

Previously this wasn't trapped. With FSA_SIZE_DEBUG enabled (not the default) everything works (it can allocate "0" bytes, as it adds its debugging record keeping to the size requested). With default settings, we corrupt the malloc heap, which doesn't get reported at the time, and may or may not get reported in any useful way later.
15:59
MoarVM: nwc10++ created pull request #1349:
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.
16:00
nwc10 er, pun seems unavoidable - oops, I didn't rebase that first. 16:02
no release is tagged yet, is it? 16:03
timotimo not to my knowledge 16:04
jnthn So, at the point that we produce the candidate, the spesh slots seem OK. But if I put a loop into invoke over the spesh slots of the selected candidate and check none are in fromspace, a bunch of them turn out to be 16:12
However, the chosen candidate itself is not in fromspace 16:14
MasterDuke some are terrestrial, some aren't? doesn't sound like a great situation 16:24
jnthn Well, it means that somehow, somewhere, the spesh slots are getting messed up 16:30
Or rather, not marked 16:31
But how/why/where is quite a mystery
It's also odd just how persistent/stable the problem is over all kinds of different changes I attempt 16:42
16:46 domidumont left
MasterDuke guess i don't feel so bad i haven't found it yet myself 16:47
jnthn Yeah, I think I'll have to give up on it for today. Odd. :/ 16:55
MasterDuke thanks for looking. any suggestions for where i should experiment around? 16:56
jnthn I'm really out of ideas (if I had another one I'd already be trying it...) 17:04
17:04 domidumont joined
jnthn Only to try and trace if the inter-gen set from the spesh thread is being mishandled somehow but I can't see how that could happen 17:04
I mean, it's as if the spesh slots just don't get GC marked in some GC run 17:05
But how exactly is quite the mystery
MasterDuke yeah, harder to debug the absence of something 17:10
Geth MoarVM/oops-for-FSA-of-0: 29052d4dfa | (Nicholas Clark)++ | src/core/fixedsizealloc.c
oops if MVM_fixed_size_alloc() is called for a size of 0 bytes.

Previously this wasn't trapped. With FSA_SIZE_DEBUG enabled (not the default) everything works (it can allocate "0" bytes, as it adds its debugging record keeping to the size requested). With default settings, we corrupt the malloc heap, which doesn't get reported at the time, and may or may not get reported in any useful way later.
17:18
nwc10 rebase!
17:34 zakharyas left 17:45 domidumont left
nine jnthn: FWIW I've looked through the code systematically and have spent considerable time in rr to trace things and still haven't found the real reason. And yes, I'm baffled as well why all the clear fixes didn't change the behaviour a bit 17:45
18:16 brrt` joined
Geth MoarVM/hash-single-allocation: 16 commits pushed by (Nicholas Clark)++
review: github.com/MoarVM/MoarVM/compare/3...8c07baf825
18:24
MoarVM: nwc10++ created pull request #1350:
Hash allocation as a single memory block
18:29
18:31 brrt` is now known as brrt
brrt nwc10: I'll have a look at it 18:32
nwc10 you might need peril sensitive sunglasses 18:33
lizmat
.oO( but but but brrt only has one head ? )
18:41
brrt usually 18:42
18:44 zakharyas joined 19:02 Altai-man joined 19:05 sena_kun left 19:21 brrt left
nwc10 does build on sparc64 so can't be *that* crack fuelled. 19:25
I can't spell
lizmat Yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2020/09/21/2020-...l-results/ 20:17
20:39 zakharyas left 20:57 brrt joined
lizmat github.com/MoarVM/MoarVM/issues/1351 21:08
21:27 Altai-man left 21:30 brrt left