Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes.
Set by lizmat on 24 May 2021.
timo when i "perf record -g rakudo fannkuch.p6" my computer immediately freezes :( 01:28
Geth MoarVM/new-disp: 449656ef17 | (Daniel Green)++ | src/spesh/inline.c
Handle sp_runbytecode_[ins] in rewrite_obj_return

Implemented by changing return_to_set into return_to_op and passing in the needed op.
MoarVM/new-disp: 52cbd533ce | MasterDuke17++ (committed using GitHub Web editor) | src/spesh/inline.c
Merge pull request #1524 from MasterDuke17/new-disp_plus_sp_runbytecode_s_fix_in_rewrite_obj_return

Handle sp_runbytecode_[ins] in rewrite_obj_return
MasterDuke timo: is fannkuch.p6 available somewhere to test with? 07:20
nine Ah, the day shift has arrived :D
Nicholas 24/7 morning requires shift working 07:22
nine Inline::Perl5's git master now works just fine on new-disp :) 09:32
lizmat wow! 09:34
nine Turns out that rebasing the new-disp branches onto their respective masters is quite trivial. Only conflicts are in _REVISION files and one with the nqp-configure subrepository 11:14
With that done, there are just 8 spectest files failing. And one of them is due to a bug that's also in master (which is just better at hiding it) 11:18
11:18 patrickb joined
nine That's including the S01-perl-5-integration tests which all pass 11:19
(needs the just released Inline::Perl5 0.54)
dogbert17 oops, a SEGV 12:23
all I have right now is this, gist.github.com/dogbert17/08ff0599...3d8924bed, and the fact that it was t/spec/S07-hyperrace/for.t which failed 12:26
I have now updated the gist. Looks like 'tss' is busted. 12:42
to repro run 'while ./rakudo-m -Ilib t/spec/S07-hyperrace/for.t; do :; done'. I have MVM_GC_DEBUG=1 and a 128k nursery, dunno if that's necessary. 12:44
it seems to be the last test (#6) in the file which messes things up 12:48
jnthnwrthngtn nine: I'm really glad to hear Inline::Perl5 already works. :) 13:07
nine jnthnwrthngtn: so you don't have to dig into my horrible code :D 13:08
jnthnwrthngtn: though at some point I'd like to discuss options for using new-disp to my advantage there
jnthnwrthngtn nine: More that I worried it'd be poking into a lot of things that have changed and be a real pain to fix up
Yes, that and also how to do much better at NativeCall ;) 13:09
Which will help...no small number of things including Inline::Perl5
nine I did try my very best to be forwards compatible when poking into internals. I fell just short of the goal 13:10
jnthnwrthngtn About the rebases: there'll never be a perfect time to push them, but if you've done the work anyway now is probably alright. I have no outstanding local work that I'll have to juggle.
nine pushed
Geth seems to have missed pushes to MoarVM and rakudo though 13:11
jnthnwrthngtn Hmm 13:13
After the Rakudo becase I run Configure and:
Unknown macro insert_list
wow, s/becase/rebase/ :D 13:14
nine jnthnwrthngtn: please pull again
jnthnwrthngtn That was quick! Fixed, thanks.
nine Looks like the rebase left the submodule update as a local change, despite me resolving the merge conflict. Submodules will always be weird I'm afraid 13:15
MasterDuke would now be a good time to re-run update_ops and re-bootstrap to get the smaller files? 13:17
nine Are smaller files by themselves useful? 13:18
jnthnwrthngtn MasterDuke: Personally I'd wait a bit longer, until we fully eliminate legacy method caches and the invocation protocol
MasterDuke well, i assume it'd help keep more stuff in cache 13:19
jnthnwrthngtn: sure, np
jnthnwrthngtn MasterDuke: Since that will need a rebootstrap also before we can totally drop it in MoarVM
MasterDuke ah yeah, then no reason to do it before then
jnthnwrthngtn As a heads up for everyone wondering about new-disp scheduling: next week is the last week I'll be about here doing new-disp stuff before vacation, then I'll be away for 2 weeks. 13:20
Also I still have to finish my talks for RakuConf :P 13:21
MasterDuke speaking of, are most of the people here attending?
jnthnwrthngtn m: say 1340 / 1349 13:33
camelia 0.993328
jnthnwrthngtn Nice :)
I should decide how to fix the signature-unpacking-based multi dispatch
MasterDuke that and the X::Multi::NoMatch seems like it might get to a passing spectest 13:35
nine dogbert17: I only saw that segfault once, before I compiled with debug symbols. Trying to reproduce it in rr, I run into deadlocks if anything 13:36
dogbert17 nine: that's annoying. I assume that the gist is of limited help 13:50
nine dogbert17: I guess fixing a deadlock is as important as fixing a segfault :) 13:57
dogbert17 I can only agree :)
nine And I happen to know where the deadlock is coming from
dogbert17 that was fast
nine In github.com/MoarVM/MoarVM/blob/mast.../log.c#L37 we're marking the thread blocked while holding the sl->body.block_mutex. In github.com/MoarVM/MoarVM/blob/mast...ker.c#L196 we're trying to lock the sl->body.block_mutex without the thread being marked blocked for GC. 14:00
Now if some other thread at that point has decided that a GC run is in order, send_log will wait for the GC run. But the GC run cannot start until the spesh thread joins in. Which will never happen, since that's still waiting for the sl->body.block_mutex 14:01
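The lock-ordering bug nine describes can be sketched in plain pthreads. This is a hypothetical illustration, not MoarVM's actual code: `mark_thread_blocked`/`mark_thread_unblocked` stand in for MoarVM's GC blocked-marking API, and the `worker_take_log_*` functions stand in for the spesh worker's log-consuming path.

```c
#include <pthread.h>

static pthread_mutex_t block_mutex = PTHREAD_MUTEX_INITIALIZER;
static int logs_consumed = 0;

/* Hypothetical stand-ins for MoarVM's GC blocked-marking calls. */
static void mark_thread_blocked(void)   { /* GC runs may proceed without us */ }
static void mark_thread_unblocked(void) { /* rejoin normal execution */ }

/* Deadlock-prone shape: waits on the mutex while NOT marked blocked.
 * If the current holder is itself waiting for a GC run, and that GC run
 * cannot start until this thread joins it, neither side progresses. */
static void worker_take_log_buggy(void) {
    pthread_mutex_lock(&block_mutex);
    logs_consumed++;
    pthread_mutex_unlock(&block_mutex);
}

/* Fixed shape: mark the thread GC-blocked *before* waiting, so a GC run
 * can complete while this thread sits in pthread_mutex_lock(). */
static void worker_take_log_fixed(void) {
    mark_thread_blocked();
    pthread_mutex_lock(&block_mutex);
    mark_thread_unblocked();
    logs_consumed++;
    pthread_mutex_unlock(&block_mutex);
}
```

The key design point is that a thread must never wait indefinitely on a resource while the GC believes it still needs to participate in the next collection.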
dogbert17 oops 14:02
easy fix? 14:04
nine looks like 14:17
Gah...caught a segfault now, but probably a different one. Caused by an object getting collected prematurely 14:21
dogbert17 once again you're falling into a rabbit hole :) 14:29
nine Hey, that's not funny :D
Geth MoarVM/fix_spesh_log_gc_deadlock: d4a8093ebc | (Stefan Seifert)++ | src/spesh/worker.c
Fix deadlock by untimely GC in multi-threaded programs

In send_log we're marking the thread blocked while holding the sl->body.block_mutex. In the spesh worker we're trying to lock the sl->body.block_mutex without the thread being marked blocked for GC. Now if some other thread at that point has decided that a GC run is in order, send_log will wait for the GC run. But the GC run cannot start until the spesh ... (5 more lines)
MoarVM: niner++ created pull request #1526:
Fix deadlock by untimely GC in multi-threaded programs
MasterDuke nine: has your current work on mutexes inspired any better solutions re gist.github.com/MasterDuke17/e74be...8ce02e4547 14:43
? 14:44
nine Not yet :/ 15:12
MasterDuke likewise, sadly 15:33
nine MasterDuke: so, I see two ways of moving forward: first, check if we actually need to hold both mutexes in that place. Maybe that's just a side effect of how the code is factored and can be avoided. If we need them, turn the single ex_release_mutex pointer into a static array with 2 slots. Only check the 2nd one if the 1st one is occupied. 16:05
So in the common case of no mutex acquired at all, we still only need to check one pointer
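Nine's two-slot idea could look roughly like the following sketch. The names (`ThreadContext`, `remember_release_mutex`, `release_remembered`) are invented for illustration; only the `ex_release_mutex` field name comes from the discussion above.

```c
#include <stddef.h>
#include <pthread.h>

#define MAX_HELD_MUTEXES 2

/* Toy stand-in for the per-thread context holding the mutexes to
 * release when an exception unwinds past them. */
typedef struct {
    pthread_mutex_t *ex_release_mutex[MAX_HELD_MUTEXES];
} ThreadContext;

/* Record a mutex for release-on-exception; returns 0 on success,
 * -1 if more than two are held (which would need a different scheme). */
static int remember_release_mutex(ThreadContext *tc, pthread_mutex_t *m) {
    if (!tc->ex_release_mutex[0]) { tc->ex_release_mutex[0] = m; return 0; }
    if (!tc->ex_release_mutex[1]) { tc->ex_release_mutex[1] = m; return 0; }
    return -1;
}

/* Release any recorded mutexes. In the common case of no mutex held,
 * this touches only the first pointer, as nine describes. */
static void release_remembered(ThreadContext *tc) {
    if (tc->ex_release_mutex[0]) {
        pthread_mutex_unlock(tc->ex_release_mutex[0]);
        tc->ex_release_mutex[0] = NULL;
        if (tc->ex_release_mutex[1]) {
            pthread_mutex_unlock(tc->ex_release_mutex[1]);
            tc->ex_release_mutex[1] = NULL;
        }
    }
}
```

A fixed two-slot array keeps the fast path branch-cheap while avoiding a dynamic allocation per exception frame, assuming at most two mutexes are ever held simultaneously in that code path.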
MasterDuke yeah, i was wondering if a variable size array was really necessary 16:07
nine I seriously hope it's not 16:08
16:15 evalable6 joined 16:19 Kaiepi left
[Coke] Let me know when I should re-try the windows build; didn't see a commit specifically about the dynamic array alloc. 16:34
17:24 Kaiepi joined 17:51 squashable6 left, squashable6 joined 18:02 reportable6 left 18:49 Geth left, Geth joined 19:14 linkable6 joined
MasterDuke [Coke]: you could try this patch gist.github.com/MasterDuke17/213ff...4083e02bac 19:15
MasterDuke Nicholas: is it possible that MVM_uni_hash_demolish doesn't completely free the hash? 20:18
i just looked a `valgrind --leak-check=full` report of compiling CORE.e with `--full-cleanup` and it shows a bunch of leaks from hashes, even ones that are explicitly demolished in MVM_vm_destroy_instance 20:25
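The kind of leak MasterDuke describes, where a demolish function frees most of a structure but misses one allocation, can be illustrated with a toy hash. This is a hypothetical sketch, not MoarVM's actual `MVM_uni_hash_demolish`; the `toy_hash_*` names are invented.

```c
#include <stdlib.h>
#include <string.h>

/* Toy hash: heap-allocated keys plus a separate bucket array. */
typedef struct {
    char  **entries;
    size_t  n;
    char  **buckets;
    size_t  nbuckets;
} toy_hash;

static void toy_hash_build(toy_hash *h, size_t n) {
    h->n = n;
    h->entries = calloc(n, sizeof *h->entries);
    for (size_t i = 0; i < n; i++) {
        h->entries[i] = malloc(4);
        memcpy(h->entries[i], "key", 4);
    }
    h->nbuckets = 2 * n;
    h->buckets = calloc(h->nbuckets, sizeof *h->buckets);
}

/* Incomplete teardown: frees the keys and the entry array, but the
 * bucket array survives and shows up as "definitely lost" under
 * `valgrind --leak-check=full` after a --full-cleanup shutdown. */
static void toy_hash_demolish_leaky(toy_hash *h) {
    for (size_t i = 0; i < h->n; i++)
        free(h->entries[i]);
    free(h->entries);
    h->entries = NULL;
}

/* Complete teardown frees the bucket array as well. */
static void toy_hash_demolish_full(toy_hash *h) {
    toy_hash_demolish_leaky(h);
    free(h->buckets);
    h->buckets = NULL;
}
```

Checking which allocation site valgrind reports as the leak origin usually pinpoints which piece a demolish routine forgot.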
21:31 patrickb left 21:39 raydiak_ joined
timo it's kinda difficult to figure out stuff going on in a callgrind recording because all the jit frames are just memory addresses, though perhaps if you also output the perf map you can map them by hand? 22:48
[Coke] next Windows build failure after MasterDuke's patch: 23:45
gist.github.com/coke/8bffe7f4823e2...cf07f7d782 - err-2.out 23:46
doesn't seem to like that syntax at all. 23:48