Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
MasterDuke any particular reason gc worklists aren't allocated with the fsa? 08:50
nine I guess they just predate the fsa
MasterDuke ok. i'll have a go at switching them 08:52
i have a wip branch that converts a bunch of MVM_*alloc to alloca or fixed_size_*, converting things that heaptrack says cause a lot of temporary allocations 08:55
by far the largest is set_size_internal, but for that i need to finish up my branch converting VMArray to use a single block for body and storage 09:00
if there's a comment that "technically this free isn't threadsafe", then converting to fixed_size_free and using the _at_safepoint version will in fact make the free threadsafe, correct? 09:30
timo if it's not threadsafe because two threads could free it at the same time, then no, it would still be bad 09:55
MasterDuke ah 09:57
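For reference, a minimal sketch of the conversion being discussed, assuming the FSA entry points in src/core/fixedsizealloc.h keep their usual signatures; the wrapper functions and the buffer here are illustrative, not real MoarVM code:

```c
/* Before: a short-lived temporary buffer from the general allocator. */
static void with_malloc(size_t n) {
    char *buf = MVM_malloc(n);
    /* ... use buf ... */
    MVM_free(buf);
}

/* After: the same buffer from the fixed-size allocator. The FSA frees
 * by size bin, so the byte count must be handed back at free time. */
static void with_fsa(MVMThreadContext *tc, size_t n) {
    char *buf = MVM_fixed_size_alloc(tc, tc->instance->fsa, n);
    /* ... use buf ... */
    MVM_fixed_size_free(tc, tc->instance->fsa, n, buf);
}

/* Deferred variant: delays the actual release until the next GC
 * safepoint, so other threads still *reading* the block stay safe.
 * Per timo's caveat above, this does not help if two threads might
 * both try to free the same block. */
static void with_fsa_safepoint(MVMThreadContext *tc, size_t n, char *buf) {
    MVM_fixed_size_free_at_safepoint(tc, tc->instance->fsa, n, buf);
}
```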
dogbert17 ===SORRY!=== Error while compiling /home/dogbert/repos/rakudo/t/spec/S06-signature/introspection.rakudo.moar 10:22
No such private method 'SET-SELF' on Map
at /home/dogbert/repos/rakudo/t/spec/S06-signature/introspection.rakudo.moar:4
boom
jnthnwrthngtn o/ 10:29
dogbert17 good morning 10:30
coffee brewing? 10:31
jnthnwrthngtn Brewed :) 10:34
lizmat I've been thinking about the 2021.11 release and how we're probably not going to be able to do that 10:38
I propose we move the release 2 weeks forward and make it an early 2021.12 release
and then skip to 2022.01 release on the 3rd Sat of January 10:39
so we all get a bit more time during the holiday season
jnthnwrthngtn *nod* 10:41
Has anyone volunteered to take on the release manager role yet (or a few, to rotate between, to reduce pressure on any individual)?
lizmat not quite firmly yet, I don't think 10:42
JRaspass I'm always amazed that Ruby launches on xmas day, I guess they get it ready sufficiently beforehand 10:46
Geth MoarVM: MasterDuke17++ created pull request #1608:
Convert temp allocations into alloca or use FSA
10:50
MasterDuke the vast majority of the remaining temporary allocations are from set_size_internal (which is sort of unavoidable) and MVM_VECTOR_(INIT|GROW) 12:12
MasterDuke could those MVM_VECTOR_* macros be converted to use the fsa? 12:12
jnthnwrthngtn MasterDuke: Not completely easily, in so far as various things assume they are just malloc'd memory that is then taken over and separately managed 12:49
(So everything doing that would need updating also)
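To make that concrete, a sketch of the hand-off pattern jnthnwrthngtn describes, based on the MVM_VECTOR_* macros in src/core/vector.h; the expansion details are approximate, and 'some_record' is a made-up consumer struct with an MVMuint16 *offsets field:

```c
/* Inside some function in the MoarVM tree. MVM_VECTOR_DECL(type, x)
 * declares the buffer plus two bookkeeping counters derived from its
 * name: x, x_num and x_alloc. */
MVM_VECTOR_DECL(MVMuint16, offsets);  /* offsets, offsets_num, offsets_alloc */
MVM_VECTOR_INIT(offsets, 16);         /* plain malloc/calloc-backed today */
MVM_VECTOR_PUSH(offsets, 42);

/* The pattern that blocks a simple FSA conversion: a consumer takes
 * over the raw buffer and later releases it with plain MVM_free(),
 * which knows nothing about FSA size bins. Every such consumer would
 * need updating along with the macros. */
some_record->offsets = offsets;       /* buffer handed off... */
/* ...much later, elsewhere: MVM_free(some_record->offsets); */
```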
MasterDuke hm. haven't even gotten quite that far yet. still trying to deal with the vectors hanging off the instance, where there isn't a tc 12:50
dogbert17 dogbert@dogbert-VirtualBox:~/repos/rakudo$ MVM_SPESH_NODELAY=1 ./rakudo-m -Ilib t/spec/S04-blocks-and-statements/let.t 12:51
MoarVM panic: Adding pointer 0x55f4a15dd310 to past fromspace to GC worklist
oops
a rather small nursery but still
#define MVM_NURSERY_SIZE (5 * 1024) 12:52
will we be able to get to the 'root' of this problem? 12:53
might this gist reveal any clues? gist.github.com/dogbert17/810f1471...e90bb57a19 12:57
ggoebel Are the Perl 5 integration tests typically run during the release? That's not mentioned in the release guide 14:07
lizmat Perl 5 integration tests are run in every spectest if Inline::Perl5 is installed, afaik 14:08
so maybe it should mention installing Inline::Perl5? 14:09
afk&
sena_kun ggoebel, this is part of the `rakudo` task, so yes. 14:26
in the rakudo task there are subtasks including rakudo-inline, see github.com/rakudo/rakudo/blob/mast...#L117-L120
MasterDuke oh, when a vector is assigned somewhere, i can't then just MVM_VECTOR_DESTROY(new_thing), because there won't be new_thing_(num|alloc). ugh 14:28
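The snag here, sketched as a fragment (expansion approximate): MVM_VECTOR_DESTROY(x) expands in terms of the x_num/x_alloc companions that only exist alongside an MVM_VECTOR_DECL, so a bare pointer copy cannot be destroyed through the macro:

```c
MVM_VECTOR_DECL(char *, names);   /* declares names, names_num, names_alloc */
MVM_VECTOR_INIT(names, 8);

char **new_thing = names;         /* only the buffer pointer gets copied */
/* MVM_VECTOR_DESTROY(new_thing);    won't compile: there is no
 *                                   new_thing_num / new_thing_alloc */
MVM_VECTOR_DESTROY(names);        /* must go through the declared name */
```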
ggoebel sena_kun: where are currently known flappers documented? 14:33
sena_kun ggoebel, flapper modules or flappers in tests? 14:35
ggoebel flappers in tests 14:43
sena_kun ggoebel, there is a meta-ticket at github.com/rakudo/rakudo/issues/4212 and additions are welcome. 14:45
nine dogbert17: that GC issue looks like a missing root, somehow involving slurpy nameds (which would explain the rarity) 15:10
dogbert17 nine: at least it is relatively easy to repro 15:19
nine indeed 15:22
Though not quite trivial to trace back to where things start to go wrong
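The general shape of a missing-root bug, sketched with made-up surroundings; MVM_repr_alloc_init and the block form of MVMROOT are real MoarVM APIs of this era, the rest is illustrative. The tiny 5KB nursery above makes the GC run on almost every allocation, which is exactly why it flushes these bugs out:

```c
static void example(MVMThreadContext *tc, MVMObject *capture) {
    MVMObject *hash;

    /* BUG pattern: any allocation can trigger a GC run; nursery objects
     * are then moved, and an unrooted local like 'capture' keeps
     * pointing into fromspace.
     *
     *   hash = MVM_repr_alloc_init(tc, tc->instance->boot_types.BOOTHash);
     */

    /* FIX: temporarily root the pointer across the allocating call so
     * the GC updates it when the object moves. */
    MVMROOT(tc, capture, {
        hash = MVM_repr_alloc_init(tc, tc->instance->boot_types.BOOTHash);
    });

    /* ... both capture and hash are safe to use here ... */
}
```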
dogbert17 (gdb) p MVM_dump_string(tc,cs->arg_names[i]) 16:11
default
dogbert17 nine: I could be wrong but it seems to be the last part of the test file which is causing problems: github.com/Raku/roast/blob/master/.../let.t#L88 16:28
nine The broken hash key is "default" and the value is Nil. So, yes 16:31
dogbert17 any theories as of yet? 16:32
MasterDuke oh. nqp just built with MVM_VECTOR_* using the fsa. however, i do have to have the jit disabled... 16:33
nine dogbert17: not yet 16:34
dogbert17 nine: could it possibly have anything to do with the merge of New disp nativecall (#1595) ? 18:29
nine dogbert17: don't see how. I've had a mild suspicion about dispatcher-replace-arg-syscall but wasn't able to confirm it either 18:42
dogbert17 I rebuilt a few older versions of MoarVM (nothing else) and the problem seems to disappear with github.com/MoarVM/MoarVM/commit/10...d15a995165 18:46
nine Err.... what the hell happened here? github.com/MoarVM/MoarVM/commit/59...9630f85141 19:32
Why is this one gigantic commit with a jumbled-up commit message?! 19:33
ugexe looks like it was squashed and merged
nine lizmat: why did you squash my branch? I spent _hours_ cleaning up commits, amending them and ensuring that there is a clear history that can be bisected properly 19:35
Geth MoarVM/new-disp-nativecall-libffi: 35 commits pushed by (Stefan Seifert)++, (Nicholas Clark)++, (Timo Paulssen)++
review: github.com/MoarVM/MoarVM/compare/0...0bb7b31497
lizmat nine: I squashed because I thought that was what you wanted :-( 19:36
nine Why would I want that? It makes everything worse :(
lizmat ok, shall I revert the squashed commit and re-apply the PR ?
nine Yes, please 19:37
Geth MoarVM: e9ce9ea7f4 | (Elizabeth Mattijsen)++ | 44 files
Revert "New disp nativecall (#1595)"

This reverts commit 592cc85489d42901dce96032820be49630f85141.
19:38
lizmat nine: github.com/MoarVM/MoarVM/pull/1595...-970269666 19:39
I waited for about an hour after that :-(
nine Looking at that PR there are several messages I haven't seen before, because I didn't get them via email. Including jnthn's approval 19:41
Geth MoarVM: lizmat++ created pull request #1609:
New disp nativecall libffi
19:43
lizmat nine: well, there's a new PR now 19:44
if you are ok with merging that now, please let me know, or do the merge yourself 19:45
and sorry, sorry for the misunderstanding :-(
nine I think this branch now needs to be rebased, otherwise it won't have an effect
Geth MoarVM/new-disp-nativecall-libffi: 35 commits pushed by (Stefan Seifert)++, (Nicholas Clark)++, (Timo Paulssen)++
review: github.com/MoarVM/MoarVM/compare/9...614b05ca5e
lizmat rebased 19:46
nine is glad the rebase just went through without conflicts then
lizmat Using index info to reconstruct a base tree...
M  src/spesh/manipulate.c
Falling back to patching base and 3-way merge...
Auto-merging src/spesh/manipulate.c
was the only problematic one
nine Ah, yes, because master got a fix there after the merge 19:47
Geth MoarVM/master: 35 commits pushed by (Stefan Seifert)++, (Nicholas Clark)++, (Timo Paulssen)++
review: github.com/MoarVM/MoarVM/compare/e...614b05ca5e
nine git log 19:48
dogbert17: can you please re-do your test and just skip commit 592cc85489d42901dce96032820be49630f85141? Maybe that will give us some more information
lizmat: thanks!
lizmat I will bump MoarVM again, I guess
all bumped 20:09
nine dogbert17: got it! 20:19
It is, indeed, MVM_capture_replace_arg
dogbert17 nine++ 20:20
timo uh oh
Geth MoarVM: 81082e1c36 | (Stefan Seifert)++ | src/6model/reprs/MVMCapture.c
Fix possible access to fromspace after MVM_capture_replace_arg

A callsite can contain pointers to the names of named arguments. These pointers can become outdated if a newly allocated and populated callsite is neither interned nor part of any other GC-rooted data structure.
MVM_capture_replace_arg first created a new callsite, then allocated a capture. This could lead to outdated pointers to argument names. Fix by reversing the order. The callsite is needed only later anyway.
20:26
nine Luckily this is one of those hard to find but trivial to fix ones. Just had to move a few lines of code around.
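A sketch of the shape of that fix, paraphrasing the commit message rather than quoting the real diff; the helper names are invented stand-ins for the MVMCapture.c internals:

```c
/* Invented stand-ins for the real internals of MVM_capture_replace_arg. */
static MVMObject   *allocate_capture(MVMThreadContext *tc);  /* may trigger GC */
static MVMCallsite *copy_callsite_replacing_name(MVMThreadContext *tc,
        MVMCallsite *orig, MVMuint32 idx, MVMString *name);

static MVMObject *replace_arg(MVMThreadContext *tc, MVMCallsite *orig,
        MVMuint32 idx, MVMString *name) {
    /* Fixed ordering: do the GC-triggering allocation first... */
    MVMObject *capture = allocate_capture(tc);

    /* ...and only then build the new, un-interned callsite whose
     * arg_names point at MVMString objects. The buggy version did these
     * two steps the other way around, so a GC during the capture
     * allocation could move the strings and leave the callsite's name
     * pointers dangling into fromspace. */
    MVMCallsite *cs = copy_callsite_replacing_name(tc, orig, idx, name);

    /* ... attach cs to the capture's body and return ... */
    return capture;
}
```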
lizmat so this would fix NativeCall's issues ?
nine No, nothing to do with NativeCall 20:28
fixes this one: gist.github.com/dogbert17/810f1471...e90bb57a19
lizmat ah, ok :-) 20:29
timo oh my 20:30
capture_replace_arg can be used in pretty much any dispatch program 20:31
dogbert17 perhaps this fixes other bugs as well
[Coke] nine++ 20:35
dogbert17 the complex.t bug was not fixed by this, ah well, it was worth a shot
MasterDuke ah ha! built rakudo with the MVM_VECTOR_* macros using the fsa (still with the jit disabled though) 20:37
almost 1M reduction in temporary allocations when compiling CORE.c.setting (but now 1.2GB leaked, so obviously something isn't quite right) 20:41
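One guess at where such a leak can come from (an assumption, not a diagnosis of this branch): the FSA frees by size bin, so an FSA-backed destroy has to pass exactly the byte count that was last allocated; any path that loses track of x_alloc, or still falls back to MVM_free, strands the block. A hypothetical FSA-backed MVM_VECTOR_DESTROY might look like:

```c
/* Hypothetical FSA-backed replacement for MVM_VECTOR_DESTROY(x). The
 * crucial detail is that the byte count must match what was last
 * handed to MVM_fixed_size_alloc for this buffer. */
#define MVM_VECTOR_DESTROY_FSA(tc, x) do { \
    MVM_fixed_size_free((tc), (tc)->instance->fsa, \
                        (x ## _alloc) * sizeof(*(x)), (x)); \
    (x) = NULL; \
    x ## _num   = 0; \
    x ## _alloc = 0; \
} while (0)
```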
nine Ha! That TCP::LowLevel bug is not a bug in NativeCall at all. It's actually a fix in NativeCall. 20:44
github.com/jmaslak/Raku-TCP-LowLev...98b56dd474
So as far as NativeCall is concerned, there's only the DB::SQLite JIT issue left 20:45
[Coke] Nice. 20:46
timo if filename contains SQLite, bail the jit 20:49
nine Well, worst case we can indeed disable JIT of native calls for the release. Would hurt a lot, but we'd still be way faster than the previous release. 20:56
lizmat nine: don't know if you saw my ruminations about the release earlier today? 20:57
logs.liz.nl/moarvm/2021-11-18.html#10:38
japhb timo: I'd call that the "video driver method of solving application conflicts"
nine lizmat: ah, yes, I do like that idea 20:58
japhb FWIW, "push the release" seems like a good idea to me
[Coke] +1 from me to space things out to a) get a better product b) avoid burnout 20:59
nine On that thought, I think I'll retire for the day and attack that JIT issue with a fresh mind tomorrow 21:00
lizmat nine++
[Coke] we don't have anyone with an M1 yet, correct? (looking at the mac mini 2000 with an M1 right now) 23:40
lizmat I do think we have people with an M1, just not with MacOS Monterey? 23:43
patrickb I have an M1 sitting around. Willing to give people access. 23:52