Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
nine Ah...after fixing 3 bugs in MVM_spesh_manipulate_split_BB_at (and a couple more in my own code) it finally compiles and runs. But the code is still not correct. Still need to use new register versions and add a PHI node where the code branches merge. 08:14
Good news is that performance doesn't seem to be affected measurably 08:16
nine Getting register versions and the PHI node in place wasn't that bad :) 09:54
jnthnwrthngtn moarning o/ 09:57
Oh yes, upholding SSA form is a bit of effort :)
lizmat closed github.com/rakudo/rakudo/issues/4595 10:02
jnthnwrthngtn github.com/rakudo/rakudo/pull/4591 has a couple of thumbs up, but not any review comments yet; will leave it a bit longer, but would be good to get it merged and some testing of it sooner rather than later 10:12
nine jnthnwrthngtn: I'm kind of in the middle of this other hefty read. One minor comment so far 10:14
"other" as opposed to the PR I reviewed yesterday 10:16
jnthnwrthngtn :D 10:17
Sorry for writing so much code :P
nine No worry. I will be getting back at you with new-disp-nativecall :D 10:18
jnthnwrthngtn :D 10:23
nine Speaking of SSA... I'm investigating a strange issue here. The native call is immediately followed by a guard on the return value. But it's guarding for type Nil which can never be true. In fact we know that it must be Pointer 10:37
jnthnwrthngtn Strange indeed; what do the logged type stats say? 10:43
Ah, since a native call is maybe not a normal return, I wonder if there are ever any types logged there... 10:44
nine For PC 388 which is dispatch_o r17(3), lits(lang-meth-call), callsite(0x98d8970, 9 arg, 9 pos, nonflattening, noninterned), ... aka the native call it says 184 x type Nil (TypeObj) 10:45
Then we hllize the return value of the native call and suddenly it's 184 x type NativeCall::Types::Pointer (Conc)
jnthnwrthngtn OK, if the stats are wrong then it's broken well before SSA and facts come into it 10:46
nine Which indicates that you're on the right track with missing logging? 10:47
MVM_args_set_result_obj logs return types while MVM_args_set_dispatch_result_obj (which is what MVM_nativecall_dispatch is using) doesn't 10:49
jnthnwrthngtn Sorta but the odd thing here is that we are logging...something
jnthnwrthngtn I mean, there's a Nil, so where does it come from 10:50
Hmm...now trying to remember why we have a set_dispatch_result_obj :) 10:52
One has to be careful with callgrind data... My re-working of special return so far shows consistent slight wallclock time improvements on both my home and office machines...but callgrind reports a slight increase in instructions retired. 10:54
nine Not all instructions are equally expensive?
jnthnwrthngtn For sure, plus cache effects and so on 10:55
Still, quite often the two correlate.
lizmat in the interest of bisectability, would it make sense to bump MoarVM before merging github.com/rakudo/rakudo/pull/4591? 11:00
jnthnwrthngtn lizmat: Yes, also nine++ is still reviewing that PR, so let's not hurry with it. 11:01
lizmat ok, going to bump now 11:03
nine Adding spesh logging to MVMDispOpcodeResultForeignCode helps, but I do not have a clue where that Nil in the logs comes from. It's still there. 11:07
Now the real good news is that with the additional logging, we get down to 10.300s! That's a whopping 40% below 2021.09. And it's not like NativeCall used to be terribly slow 11:09
jnthnwrthngtn Wowser 11:11
nine Btw. that's reading a million lines of CSV
jnthnwrthngtn Plus a few more percent from frame changes, most likely
nine I haven't rebased my branch yet. Looking forward to the new numbers :)
Also we'll be able to remove the args stuff
jnthnwrthngtn Yes :D 11:13
Also, that callstack record for nested interpreters is REALLY nice because it lets me toss a boatload of return-time checks
nine :) 11:14
It's always nice when bugfixes lead to simpler and faster code
Geth MoarVM/new-disp-nativecall: bcef508b12 | (Stefan Seifert)++ | src/spesh/manipulate.c
Fix MVM_spesh_manipulate_split_BB_at leaving duplicate BB idxs

MVM_spesh_manipulate_split_BB_at didn't take inlining into account when renumbering BB indexes. Inlining can cause BBs with higher indexes to appear before the current BB.
MoarVM/new-disp-nativecall: 5c1bcddfa1 | (Stefan Seifert)++ | src/spesh/manipulate.c
Fix MVM_spesh_manipulate_split_BB_at leaving bogus num_succ on new BB
MoarVM/new-disp-nativecall: 5669645a10 | (Stefan Seifert)++ | src/spesh/manipulate.c
Fix "reverse postorder calculation failed" after MVM_spesh_manipulate_split_BB_at

Need to update the number of BBs in the graph after adding a new one. Otherwise sanity checks will fail.
lizmat sanity check: 11:27
Geth MoarVM/new-disp-nativecall: 5295382e35 | (Stefan Seifert)++ | 3 files
Support JIT compilation of native calls with rw args

For rw args we have spesh generate code to decont them to natively typed registers before the call and assign the values from the natively typed registers back to the lex refs after the call. The JIT then just passes pointers to the natively typed registers.
lizmat after bumping NQP and MoarVM, stopping and starting the IRC log server should just work, right? 11:28
it should just recompile all modules and that's it, right?
Altai-man I already don't like the premise...
lizmat Missing or wrong version of dependency 'gen/moar/stage2/NQPHLL.nqp' (from 'site#sources/45334C557865A97D1ECA0D3F3A3FAF2017FCE553 (OO::Monitors)')
I've been seeing that a lot lately, but always attributed it to switching between versions and stuff 11:29
jnthnwrthngtn Well, if you install a new NQP you'd also need to install a new Rakudo
lizmat this time, I'm sure I had a clean Rakudo build
jnthnwrthngtn ??
Geth MoarVM/new-disp-nativecall: 7ce28199d4 | (Stefan Seifert)++ | 4 files
Fix NULL pointer results getting boxed after native calls

NativeCall returns a type object for NULL pointer results. When lowering sp_runnativecall_o to sp_runnativecall_i to facilitate JIT compilation, we need to preserve this behaviour. We therefore emit new ops to check the return value for non-zero before boxing. Otherwise we just assign the return type object to the result.
lizmat this is using --gen-moar on the Configure.pl
as a "normal" user might do?
jnthnwrthngtn OK, then I guess it's all taken care of
lizmat well, it's not, then :-(
jnthnwrthngtn Assuming you ran make install in Rakudo too, but I think you'd see much more failures.
lizmat yes, I ran --make-install
I'll run a make install again, just to make sure 11:31
jnthnwrthngtn OK, then no idea, although I've also seen an increasing number of pre-comp related reports
nine has never trusted Configure.pl with these things. Manual install FTW!
lizmat fwiw, it is **always** OO::Monitors it is complaining about
(no change after make install) 11:32
nine jnthnwrthngtn: we do not log the result types of syscalls either 11:33
lizmat re-installing OO::Monitors (zef uninstall / zef install)
does not make a difference
this just makes me worry about production installations :-(
nine lizmat: OO::Monitors might simply be the first module it's trying to load
lizmat: has Configure.pl actually re-compiled and installed rakudo after installing an updated nqp? 11:34
lizmat could be... but it's also the one module playing meta protocol tricks
Altai-man lizmat, FWIW nowadays people use docker for web production envs.
lizmat nukes nqp subdir, rebuilds and goes off for lunch
nine Altai-man: some people do. Others don't 11:39
jnthnwrthngtn lizmat: I don't believe OO::Monitors does anything especially tricky, tbh, I suspect it's just deepest in the dep chain or some such 11:40
Geth MoarVM/new-disp-nativecall: 55bd7ff33e | (Stefan Seifert)++ | src/disp/program.c
Log types of native routine's return values.

This would usually be handled by MVM_args_set_result_obj, but we cannot use that as it would assume that the callee got its own frame on the callstack.
Note: in addition to these logs we seem to be logging Nil results. No idea where those come from. Needs investigation.
MoarVM/new-disp-nativecall: 9c4b70ff22 | (Stefan Seifert)++ | src/jit/graph.c
Support JIT compilation of native calls with VMArray arguments
lizmat and completely rebuilding rakudo after nuking the nqp subdir does *not* fix the problem 12:01
nine what the?
lizmat nukes site and starts installing modules again 12:02
lizmat and even after that, it complains about OO::Monitor :-( 12:11
nine Do you by any chance have multiple rakudo installations?
lizmat I try very much to only have *one* to prevent these types of issues 12:12
lizmat is going to rinse and repeat with a clean rakudo and site and ugexe++'s latest patch 12:15
ugexe your error is referencing site# so i dont think my changes should be related 12:16
(my changes affect file#)
lizmat I realize that, but installation processes use -I, don't they?
nine no
lizmat well, let's see what this experiment brings :-) 12:17
FWIW, with ugexe's patch, I *am* able to re-install modules needed for the log server by just doing "zef install App::Rakui::Log" 12:30
nine: but the testing process does, does it not? 12:42
I mean, when I do a zef install, I see the module being compiled at least twice
nine ah, yes 12:53
[Coke] nine: anything you want me to test on windows? 14:10
Geth MoarVM/new-special-return: 16 commits pushed by (Jonathan Worthington)++
review: github.com/MoarVM/MoarVM/compare/6...6427d7bdfa
MoarVM: jnthn++ created pull request #1581:
Migrate special return to callstack and simplify return handling
timo "conservatively tired", what a mood 14:55
jnthnwrthngtn haha, fixed 14:57
I am indeed tired of Conservatives... :)
The 5% improvement on recursive fib is on top of the improvement from ->work and ->env going onto the callstack 14:58
+256 −302 15:00
Nice, this gets rid of more than it adds :)
nine jnthnwrthngtn: unbelievable! I rebased new-disp-nativecall and got down to 9.356s! 18:27
This is now > 50 % faster than 2019.08 for which the lowest number I could measure was 14.380s 18:35
Geth MoarVM/new-disp-nativecall: 27 commits pushed by (Stefan Seifert)++
review: github.com/MoarVM/MoarVM/compare/9...334714924b
timo redonculous 18:38
MasterDuke wow
timo i wonder how much i'd feel in my SDL::Raw examples 18:40
nine So far I haven't even used the profiler. Just did that the first time and noticed that we're spending a ridiculous amount of time in Inline::Perl5::Array's constructor which is just method new(:$ip5 is raw, :$p5 is raw, :$av is raw) { my \arr = self.CREATE; arr.BUILD(:$ip5, :$p5, :$av); arr } submethod BUILD(:$!ip5 is raw, :$!p5 is raw, :$!av is raw) { } 18:41
So maybe there's even some low hanging fruit there
japhb Yeah, that seems needlessly overhead-y out of context -- almost feels like there's either missing context or some interesting history that led to that constructor. 19:06
lizmat nine: any reason why you didn't do a custom .new on that taking 3 positionals ? 19:07
and do the attribute assignments / binding directly using nqp::bindattr ? 19:08
nine Because nqp is an implementation detail, not a public API
Wowser... 8.125s by just turning those attributes into public ones and getting rid of the custom constructor 19:21
MasterDuke nice 19:22
lizmat creeping up on being 2x as fast :-) 19:25
[Coke] nine++ 19:26
nine That .2s startup penalty becomes more and more of an issue
lizmat a bare 'use Inline::Perl5' comes in at .425 seconds for me :-( 19:31
lizmat a bare 'use Test' at .240 19:32
so what makes that difference ?
lizmat lots of submodules ? 19:32
/ dependencies?
nine It's not that many 19:33
nine second run any different? 19:33
lizmat nope 19:34
basically running: time raku -e 'use Inline::Perl5'
jnthnwrthngtn m: say 1.629 / 1.741 19:36
camelia 0.935669
jnthnwrthngtn ~6% win from new-special-return on recursive fib on my home box.
lizmat gen/moar/stage2/NQPCORE.setting:527 seems to be doing 10% of the time of loading Inline::Perl5 (being called 3.5K+ times) from a --profile-compile 19:37
MasterDuke `No such method 'as' for invocant of type 'Any'` when i do `zef install Inline::Perl5 --force-install`. didn't someone have a similar problem recently?
lizmat --exclude=perl ? 19:38
MasterDuke that got farther, but why is it trying to install 0.50 now when it shows 0.56 available in the ecosystem? 19:39
lizmat ah, that line in NQPCORE.setting is 'nqp::dispatch( .... ' 19:47
MasterDuke ok, now `time raku -e 'use Inline::Perl5'` works and gives me ~0.3s 19:48
jnthnwrthngtn lizmat: It's the NQP method dispatcher (called to bind a method callsite in NQP code) 19:49
lizmat yeah.. got that :-) 19:51
very red in the profile
but I guess that's a bootstrap issue :-)
nine sp_getlex_ins r11(1), lex(idx=4,outers=0,$index) 19:52
const_i64_16 r12(4), liti16(-1)
eq_i r12(5), r11(1), r12(4)
Oh the efficiency!
hllbool r18(2), r12(5) 19:53
sp_getspeshslot r21(0), sslot(14) # [014] Start of dispatch program translation
sp_runcfunc_i r11(2), r21(0), liti64(139730461907904), r18(2)
unless_i r11(2), BB(16) # [062] start of exprjit tree
You had it right there! No need to go through all that additional work :( 19:54
jnthnwrthngtn nine: The sp_runcfunc_i is probably from the boolification handling
nine it is
jnthnwrthngtn nine: For syscalls we could do with a "here's how to specialize this better than the call" mechanism 19:55
nine It just feels like in this case we still failed to communicate the semantics and optimization opportunities to the vm
jnthnwrthngtn I'm not sure how well it understands hllbool, tbh
But ideally it'd spot this is a box/unbox style situation 19:56
nine It'd have to catch it before we turn the understandable boot-boolify into the opaque sp_runcfunc_i 19:57
jnthnwrthngtn nine: I don't think so; sp_runcfunc_i could instead be translated to something simpler 19:58
We just don't have the mechanism for that yet 19:59
lizmat: Another thing: the cost of the method dispatcher may also be because it's the first thing that requires deserialization of meta-objects in order to have the method table to do the lookup in
And that happens lazily on first request for the meta-object
So that may be a part of it too 20:00
lizmat and I guess Inline::Perl5 does a lot of that
compared to Test
jnthnwrthngtn Yeah, though this is curious because it's NQP's method dispatcher, not Rakudo's
lizmat well, it's the same for "use Test" 20:03
so the fact that it's NQP's method dispatcher is not due to Inline::Perl5
jnthnwrthngtn I don't know the pre-comp logic well, but if I did `raku -e 'use Cro::HTTP::Router'` should I be surprised to see time spent in `is-up-to-date` (SETTING::src/core.c/CompUnit/PrecompilationStore/File.pm6:108)? 20:05
There's no -Ilib, I don't have a PERL6LIB or RAKULIB set, etc.
lizmat I think you should be, I am at least 20:06
jnthnwrthngtn Yeah, I thought using a CURI avoided that
MasterDuke what's the difference between `tc->cur_frame->static_info->body.cu` and `jg->sg->sf->body.cu`? 20:09
nine We do compare the checksums of dependencies we have in our header against the checksums recorded in the dependency's header
MasterDuke first one from interp.c and second one from emit.dasc
nine MasterDuke: with a lot of assumptions about the context of your question, they should probably point at the same comp unit 20:12
MasterDuke cool. working on jitting getcurhllsym 20:14
Geth MoarVM: MasterDuke17++ created pull request #1582:
Lego JIT of getcurhllsym
lizmat that looks pretty easy and pretty cool! 20:54
MasterDuke huh. windows doesn't like ^^^. looks like a similar problem to all those other ops i jitted that failed on windows 21:29
MasterDuke i have it caught in windbg, but it's jitted code on windows, so not entirely sure what's going on i.imgur.com/QyxWvoX.png 22:41