Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
00:03 reportable6 left 00:05 reportable6 joined 00:08 Kaiepi joined 00:10 Kaiepi left, Kaiepi joined 00:17 Kaiepi left, Kaiepi joined 00:26 Kaiepi left, Kaiepi joined 00:44 Kaiepi left, Kaipi joined 00:47 Kaipi left 00:48 Kaipi joined 00:51 frost joined 01:01 Kaipi left, Merfont joined 01:08 Merfont left, Merfont joined 01:42 Merfont left 01:43 Kaiepi joined 01:59 Kaiepi left, Kaiepi joined 02:20 evalable6 joined 02:33 Kaiepi left, Kaipi joined 02:36 Kaipi left 02:37 Kaiepi joined 03:09 Kaipi joined, Kaiepi left 03:16 Kaipi left 04:30 gfldex left, Voldenet left, JRaspass left, gfldex joined, JRaspass joined 04:31 Voldenet joined 05:27 Kaiepi joined
Nicholas good *, #moarvm 05:51
06:02 reportable6 left
japhb Good *, Nicholas 06:03
06:03 reportable6 joined 07:07 Kaiepi left, Kaiepi joined 08:02 cognominal_ joined 08:03 cognominal_ joined 08:05 cog__ left 09:05 linkable6 left, evalable6 left, linkable6 joined 09:07 evalable6 joined 09:12 RakuIRCLogger left, RakuIRCLogger joined 09:44 sena_kun left
jnthnwrthngtn moarning o/ 09:50
lizmat o/ jnthnwrthngtn
Nicholas \o 09:52
09:53 Kaipi joined, Kaiepi left
jnthnwrthngtn MoarVM VMIL talk tomorrow: 2021.splashcon.org/home/vmil-2021#program 09:55
lizmat better link: 2021.splashcon.org/details/vmil-20...g-language :-) 09:57
Mondenkind '6' nice 09:59
10:04 Kaipi left 10:08 Kaiepi joined
Geth MoarVM: 74c35679a3 | (Stefan Seifert)++ | 5 files
Fix segfault when thread deopts while another tries to find a dynamic

Commit e94893f897495f3de16d4898279c87e356058acb fixed a possible segfault in the frame walker caused by a concurrent deopt in a different thread. The fix was to use a local copy of the frame's spesh_cand pointer. However, MVM_spesh_deopt_find_inactive_frame_deopt_idx, which is called by the same code, also accesses the frame's spesh_cand. So this function could still end up ... (5 more lines)
10:25
MoarVM: 80f8edf947 | (Stefan Seifert)++ | src/spesh/frame_walker.c
Fix segfault in frame walker when other thread clears the frame's extra

Similar to another thread clearing a frame's spesh_cand leading to issues fixed in commit 74c35679a3fba707ad695c53c28918aca30b95b5, it's also possible that another thread clears a frame's extra, right after we checked it for non-NULL and before we try to dereference it.
Fix by taking a local copy of the pointer before we check and dereference it.
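For readers skimming the log, the pattern both commits apply is "read the shared pointer once into a local, then only check and dereference that local". A minimal sketch of the idea, with made-up type and field names rather than the actual frame walker code:

    #include <stdio.h>

    /* Simplified stand-ins for MoarVM's frame and spesh candidate types. */
    typedef struct { int num_deopts; } Candidate;
    typedef struct { Candidate *spesh_cand; } Frame;

    /* Racy: another thread may clear frame->spesh_cand between the check
     * and the dereference, so the second read can see NULL. */
    static int deopt_count_racy(Frame *frame) {
        if (frame->spesh_cand)
            return frame->spesh_cand->num_deopts;
        return 0;
    }

    /* The pattern from the commits: read the shared pointer once into a
     * local, then check and dereference only that local copy. */
    static int deopt_count_safe(Frame *frame) {
        Candidate *cand = frame->spesh_cand;
        if (cand)
            return cand->num_deopts;
        return 0;
    }

    int main(void) {
        Candidate cand = { 3 };
        Frame frame = { &cand };
        printf("%d %d\n", deopt_count_racy(&frame), deopt_count_safe(&frame));
        return 0;
    }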
MoarVM: ba7df1bfc8 | (Jonathan Worthington)++ (committed using GitHub Web editor) | 5 files
Merge pull request #1568 from MoarVM/fix_segfault_finding_dynamics_after_deopt

Fix segfault finding dynamics after deopt
10:38 sena_kun joined
Geth MoarVM: b80ad65ed2 | (Jonathan Worthington)++ | 4 files
Include dispatcher name in full inline cache log

To make it much easier to work out which ones are in need of adding a megamorphic dispatch handler.
11:05
11:05 camelia left 11:34 camelia joined 11:35 squashable6 left 12:02 reportable6 left 12:04 linkable6 left 12:37 squashable6 joined
jdv is there an easy way to watch this vmil presentation? 12:41
12:49 linkable6 joined
lizmat looks like it will become available as a video at some time 12:49
jnthnwrthngtn I've no idea about watching it live, but I think there will be a public video afterwards 12:58
Nicholas sneak into jnthnwrthngtn's flat?
jnthnwrthngtn Well yes, that would be a way to watch it live tomorrow :P 13:02
13:23 frost left 14:05 reportable6 joined 14:10 otokan joined 14:17 otokan left
jnthnwrthngtn m: sub foo(&code) { code() }; multi m1() { 0 }; proto m2() { {*} + 1 }; multi m2() { 41 }; say foo(&m1); say foo(&m2) 14:22
camelia 0
42
dogbert17 m: my @m = [1,2,3],[2,6,10],[3,12,21]; for @m -> @r { my $div = @r[0]; @r X/= $div; my $x = 0 }; dd @m 14:30
camelia Array @m = [[1.0, 2.0, 3.0], [1.0, 3.0, 5.0], [1.0, 4.0, 7.0]]
dogbert17 m: my @m = [1,2,3],[2,6,10],[3,12,21]; for @m -> @r { my $div = @r[0]; @r X/= $div }; dd @m
camelia Array @m = [[1, 2, 3], [2, 6, 10], [3, 12, 21]]
dogbert17 bizarre 14:32
lizmat and another Rakudo Weekly News hits the Net: rakudoweekly.blog/2021/10/18/2021-...ning-with/
dogbert17 yay, lizmat++ 14:33
nine Weekly is definitely the nicest part of Monday :) 14:37
japhb lizmat: In addition to the emotes thing that I mentioned yesterday over in #raku, I noticed that links in the logs have a rollover color that is the same as normal text (so the clickability seems to disappear when you're about ready to click) 14:38
lizmat japhb: please make an issue for that :-) 14:39
japhb OK, where's the repo?
lizmat japhb: look at the bottom of the page: github.com/lizmat/App-Raku-Log/issues
afk for a bit& 14:40
japhb *smacks forehead* Yup, too early for proper thinking
jnthnwrthngtn lizmat++ 14:48
japhb Three issues added, thanks for the push. :-)
jnthnwrthngtn lizmat++ 14:51
one before reading it and one after :P 14:52
Currently playing with adding a megamorphic handler to raku's raku-call-meth dispatcher
Turns out that for methods, given that dispatch programs are rejected really quickly when there's a varying type, you can pile up surprisingly many dispatch programs (surprising to me, anyway) before hitting the point where an O(1) hash lookup wins 14:55
nine How many is that? 15:02
jnthnwrthngtn Crossover point seems to be towards 32 entries
nine Darn....I just wanted to say my guess is 30 after almost posting 20 :)
But then, you already pointed out that it's surprisingly large 15:03
jnthnwrthngtn I don't think I'll actually set the threshold that high, since there's memory considerations to be had as well
But 16 entries is maybe reasonable, while I'd originally been thinking 4 or 8
nine I guess the 1 in O(1) is quite large after all for hash lookups 15:04
jnthnwrthngtn The costs of hitting the hard limit on inline caches and thus making throwaway dispatch programs is quite huge, however 15:06
nine Btw totally unrelated but interesting. CPython may lose its GIL after all: lwn.net/SubscriberLink/872869/0db525a6300781dc/
jnthnwrthngtn For example, I set the hard limit to 32 and did a benchmark where a callsite did 100,000 method calls on 50 different types. Took 20.8s before, but just 2.3s with my the addition of a megamorphic handler. 15:07
nine That's....quite a speedup :) 15:08
Geth MoarVM/new-disp-nativecall-part1: 196803ea4b | (Stefan Seifert)++ | src/6model/reprs/NativeCall.c
Fix segfault due to uninitialized entry_point in native calls

Cloning a NativeCall site could leave it with an initialized lib_handle but a NULL entry_point if the original wasn't built yet. However, the check for whether we have to initialize the call site before we can proceed with the call only looks at lib_handle. Since we unconditionally created a lib_handle when cloning (even if the original didn't have one), we wouldn't initialize entry_point and would end up with a segfault.
Fix by only creating a new lib_handle when the original we're cloning is actually fully initialized.
15:12
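The idea behind that fix, as a simplified sketch (the struct, helpers, and lazy-setup stub are stand-ins, not the actual NativeCall.c code): only mark a clone as set up when the original it was cloned from is fully initialized, so the lazy setup path still runs for half-built originals.

    #include <stdlib.h>
    #include <stddef.h>

    /* Simplified stand-in for a NativeCall body; not the actual MoarVM struct. */
    typedef struct {
        void *lib_handle;   /* "is this call site set up yet?" only checks this */
        void *entry_point;  /* ...but making the call also needs this */
    } CallBody;

    static void make_call(CallBody *body) {
        if (!body->lib_handle) {
            /* lazy setup: load library and resolve symbol (stubbed here) */
            body->lib_handle  = malloc(1);
            body->entry_point = malloc(1);
        }
        /* the real code dereferences body->entry_point here; NULL = segfault */
    }

    /* Buggy clone: always gives the clone a lib_handle, even when the original
     * was never built, so make_call() skips setup and entry_point stays NULL. */
    static void clone_buggy(CallBody *dst, const CallBody *src) {
        dst->lib_handle  = malloc(1);
        dst->entry_point = src->entry_point;   /* NULL if src wasn't built yet */
    }

    /* Fixed clone (the idea from the commit): only carry the handle over when
     * the original is fully initialized; otherwise leave everything NULL so
     * make_call() runs the lazy setup. */
    static void clone_fixed(CallBody *dst, const CallBody *src) {
        if (src->lib_handle && src->entry_point) {
            dst->lib_handle  = src->lib_handle;
            dst->entry_point = src->entry_point;
        }
        else {
            dst->lib_handle  = NULL;
            dst->entry_point = NULL;
        }
    }

    int main(void) {
        CallBody orig = { NULL, NULL };    /* original not built yet */
        CallBody bad, good;
        clone_buggy(&bad, &orig);          /* bad.entry_point stays NULL */
        clone_fixed(&good, &orig);         /* good is lazily set up below */
        make_call(&good);
        return 0;
    }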
MoarVM/new-disp-nativecall-part1: 3ac6c6fff3 | (Stefan Seifert)++ | src/6model/reprs/NativeCall.c
Fix missing data on clones of NativeCall sites

We never copied sym_name and arg_info, so we could end up with segfaults when trying to initialize (i.e. load library and symbol) a clone of a NativeCall site.
MoarVM/new-disp-nativecall-part1: c10c3aebc7 | (Stefan Seifert)++ | 3 files
Fix nested runloops leaving call stack in confused state

Certain actions (like return) unwind the call stack to the next record representing a call frame. When returning from a nested runloop (e.g. a NativeCall callback) however, we must only unwind the records added by the code running in the nested runloop. Otherwise we can end up unwinding records that are still needed by a currently running dispatch in the outer ... (7 more lines)
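A rough illustration of the bounded unwinding this commit describes (generic types and names, not MoarVM's call stack code): the nested runloop remembers where the stack was when it was entered and never unwinds below that point, so records belonging to the outer, still-running dispatch survive.

    #include <stdio.h>

    #define STACK_SIZE 64

    typedef enum { REC_FRAME, REC_DISPATCH } RecordKind;

    typedef struct {
        RecordKind records[STACK_SIZE];
        int        top;                 /* index one past the last record */
    } CallStack;

    /* Old behaviour: unwind until we hit any frame record, possibly one
     * pushed before the nested runloop started. */
    static void unwind_to_frame(CallStack *cs) {
        while (cs->top > 0 && cs->records[cs->top - 1] != REC_FRAME)
            cs->top--;
    }

    /* Fixed behaviour: never unwind below the point where the nested
     * runloop was entered. */
    static void unwind_to_frame_bounded(CallStack *cs, int nested_base) {
        while (cs->top > nested_base && cs->records[cs->top - 1] != REC_FRAME)
            cs->top--;
    }

    int main(void) {
        CallStack cs = { { REC_FRAME, REC_DISPATCH }, 2 };
        int nested_base = cs.top;              /* entering the nested runloop */
        cs.records[cs.top++] = REC_DISPATCH;   /* work done inside the callback */
        unwind_to_frame_bounded(&cs, nested_base);
        printf("bounded unwind leaves top = %d (outer dispatch record kept)\n", cs.top);
        unwind_to_frame(&cs);                  /* old behaviour unwinds further */
        printf("unbounded unwind leaves top = %d\n", cs.top);
        return 0;
    }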
MoarVM: niner++ created pull request #1569:
New disp nativecall part1
15:14
jnthnwrthngtn Hm, odd, I've busted S03-metaops/hyper.t apparently 15:20
Geth MoarVM/new-disp-nativecall-part1: dffe84014d | (Stefan Seifert)++ | 3 files
Fix nested runloops leaving call stack in confused state

Certain actions (like return) unwind the call stack to the next record representing a call frame. When returning from a nested runloop (e.g. a NativeCall callback) however, we must only unwind the records added by the code running in the nested runloop. Otherwise we can end up unwinding records that are still needed by a currently running dispatch in the outer ... (7 more lines)
15:21
15:25 Xliff left
Geth MoarVM/new-disp-nativecall-part1: 907a73b835 | (Stefan Seifert)++ | 4 files
Fix nested runloops leaving call stack in confused state

Certain actions (like return) unwind the call stack to the next record representing a call frame. When returning from a nested runloop (e.g. a NativeCall callback) however, we must only unwind the records added by the code running in the nested runloop. Otherwise we can end up unwinding records that are still needed by a currently running dispatch in the outer ... (7 more lines)
15:27
nine This CI thing is really good at catching my stupid mistakes
jnthnwrthngtn++ too :) 15:31
[Coke] not that anyone was worried, but my windows build is still working. :) 15:41
CI++ 15:42
Geth MoarVM: 4310520f0e | (Jonathan Worthington)++ | src/spesh/inline.c
Drop provably pointless sp_resumption ops

When we perform an inlining, and there is no way that it can deopt, that further implies there's no way it could trigger a resumption. In that case, we do not need the sp_resumption instructions. While they are very cheap to run, they also register usages of other instructions, so deleting them can lead to further instruction deletions. This can in turn bring more ... (6 more lines)
17:11
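The cascading effect described in that commit can be illustrated with a generic worklist-style dead-instruction sweep (a sketch only, not the spesh implementation): deleting an instruction drops the usage counts of its operands, which may make them deletable on the next pass.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct {
        int use_count;     /* how many other instructions read this one's result */
        int has_side_fx;   /* never delete instructions with side effects */
        int deleted;
        int operands[2];   /* indices of instructions this one reads, -1 = none */
    } Ins;

    static void delete_dead(Ins *ins, size_t n) {
        int changed = 1;
        while (changed) {                    /* sweep until a fixed point */
            changed = 0;
            for (size_t i = 0; i < n; i++) {
                if (ins[i].deleted || ins[i].has_side_fx || ins[i].use_count > 0)
                    continue;
                ins[i].deleted = 1;
                changed = 1;
                for (int o = 0; o < 2; o++)  /* free up the operands' usages */
                    if (ins[i].operands[o] >= 0)
                        ins[ins[i].operands[o]].use_count--;
            }
        }
    }

    int main(void) {
        /* ins0 is only used by ins1; ins1 (think: an sp_resumption with no
         * possible deopt) has no users, so both go away. */
        Ins ins[2] = {
            { 1, 0, 0, { -1, -1 } },
            { 0, 0, 0, {  0, -1 } },
        };
        delete_dead(ins, 2);
        printf("ins0 deleted: %d, ins1 deleted: %d\n", ins[0].deleted, ins[1].deleted);
        return 0;
    }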
17:14 otokan joined
jnthnwrthngtn home time & 17:18
17:18 sena_kun left
Nicholas ./rakudo-m -Ilib t/02-rakudo/03-cmp-ok.t goes SEGV on unknown address 0x000000000010 (pc 0x7f680fec6258 bp 0x7fffd1818520 sp 0x7fffd1818350 T0) 17:54
#0 0x7f680fec6257 in MVM_disp_program_run src/disp/program.c:2950
seems to be a NULL pointer dereference
needs MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1
nine Could be related to this? dev.azure.com/MoarVM/MoarVM/_build...d8969918b3 17:55
MasterDuke i can repro gist.github.com/MasterDuke17/44cac...1155111183 17:58
18:03 reportable6 left
jnthnwrthngtn Neat, seems that with the monomorphic branch the slowdown in Agrammon on new-disp is solved. 18:37
And it's still slowing caches, so maybe once I get those sorted out it'll be faster than before 18:38
MasterDuke what about that example Mondenkind had the other day?
nine Intriguing. That segfault requires MVM_SPESH_BLOCKING and NODELAY but doesn't actually happen in speshed code 19:00
jnthnwrthngtn MasterDuke: Very unlikely to help with that. 19:12
Or at least, not with the main problem it's constructed to expose.
MasterDuke oh wait, you're talking about something other than that rakudo megamorphic-handlers branch? 19:14
nine Oh, it's PEA again. 19:45
We're segfaulting in unspeshed code because we deopted into it. And we failed to materialize the object that we're guarding
19:46 sena_kun joined
nine PEA: OPT: eliminated an allocation of Scalar into r4(3) 20:00
PEA: OPT: eliminated an allocation of Scalar into r4(5)
But nothing about materialization in the deopt log 20:01
Also nothing about that in the spesh log
20:03 reportable6 joined
nine Oh, looking at the right spesh log helps... 20:09
lizmat smiles here also 20:18
[Coke] . o O (?)
lizmat no worries... :-) 20:19
nine So, we're deopting at sp_guardconc r6(24), r6(6), sslot(1), litui32(70) # [011] Inserted to use specialization 20:24
This op carries 2 annotations:
[Annotation: INS Deopt One Before Instruction (idx 70 -> pc 218; line 272)] 20:25
[Annotation: INS Deopt Synth (idx 68)]
For the Synth point we have the required materializations: At 68 materialize 0 into r0, At 68 materialize 1 into r1, At 68 materialize 1 into r4
But the guardconc deopts to the Deopt One Before point 20:26
jnthnwrthngtn MasterDuke: Sorry, went to eat. The improvement is from the megamorphic-handlers branch, but it deals with megamorphic method dispatch primarily, and the example is a megamorphic multi dispatch. 20:29
MasterDuke ah
21:03 linkable6 left, evalable6 left 21:17 sena_kun left 21:22 sena_kun joined 21:37 squashable6 left 21:40 squashable6 joined
Geth MoarVM: MasterDuke17++ created pull request #1570:
Fixing warnings seen when compiling with GCC using -O2 (instead of the -O3 that the build defaults to)
21:57
22:06 sena_kun left 22:16 otokan left 22:18 sena_kun joined 23:05 evalable6 joined 23:06 linkable6 joined 23:14 colemanx joined