Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes. Set by lizmat on 24 May 2021.
00:02 reportable6 left
00:05 reportable6 joined
00:12 linkable6 left
00:13 linkable6 joined
02:16 squashable6 left
02:19 squashable6 joined
02:23 squashable6 left
02:25 squashable6 joined
02:53 linkable6 left
02:56 linkable6 joined
04:56 leedo left, leedo joined
05:24 dogbert17 left
05:25 dogbert17 joined
06:02 reportable6 left
06:05 reportable6 joined
06:33 patrickb joined
07:00 patrickb left
Nicholas | good *, #moarvm | 07:44
08:10 lizmat left, lizmat joined
nine | Dry morning! | 08:11
08:26 Geth joined
10:13 hankache joined
10:56 hankache left
timo | and a dry morning to you | 11:00
11:01 sena_kun joined
sena_kun | I wonder if new-disp somehow leaves the CPU starved: when I run a spectest on master, the load average is about 32, as it should be, but with new-disp it's only about 6-7. | 11:36
sena_kun | I suspect that's already known, as there is no optimizer yet.
lizmat | interesting... | 11:42
lizmat | I don't really see how that would matter, though
timo | i wonder what magic incantation you can use for perf to get info about this | 11:50
timo | like, do you want to sample like cache wait times or so?
11:51 AlexDaniel left
11:54 AlexDaniel joined
sena_kun | timo, I don't think I want to, though I can if guided. I'm obviously under-qualified to think about any of this, so I'm just stating what I see as someone who has to run the tests. | 12:00
12:02 reportable6 left
12:03 reportable6 joined
nine | I don't think it can be cache waits, as those are too low-level. Load average is the average number of processes that were runnable, from the scheduler's point of view, over the given time frame. | 12:10
nine | I don't think the scheduler knows when a process is waiting on memory. That changes much too fast and too often.
nine | So if the load average is low despite many processes "running", many of them must be in a sleep state, i.e. waiting for some external factor like a timer or a connection or something like that. | 12:11
nine | That's if there are that many processes in the first place. At least the Perl harness isn't actually that good at keeping many parallel processes busy. | 12:12
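The "load average" nine is describing is the same 1/5/15-minute figure that uptime(1) prints; on Linux and the BSDs it can be read programmatically via getloadavg(3). A minimal, illustrative C sketch (not from the log):

    #include <stdio.h>
    #include <stdlib.h>

    /* Print the kernel's load averages: the average number of runnable
     * (and, on Linux, also uninterruptible) tasks over 1, 5 and 15 minutes. */
    int main(void) {
        double load[3];
        if (getloadavg(load, 3) == -1) {   /* fills in up to 3 samples */
            perror("getloadavg");
            return 1;
        }
        printf("1m %.2f  5m %.2f  15m %.2f\n", load[0], load[1], load[2]);
        return 0;
    }

If a spectest run on new-disp keeps this figure at 6-7 on a machine that reaches ~32 on master, most of the harness's worker processes must be sleeping rather than runnable, which matches nine's reading.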
12:23 MasterDuke left
12:56 TempIRCLogger joined
13:08 dogbert17 left
13:09 TempIRCLogger left
13:12 dogbert17 joined
13:16 MasterDuke joined
MasterDuke | jnthnwrthngtn: is there any LHF (low-hanging fruit) re speshing new-disp? | 13:17
lizmat | feels it's time for a bump | 13:23
13:55 Kaiepi left, Kaiepi joined
jnthnwrthngtn | MasterDuke: Not really; the first thing that really needs to be done is switching spesh back on for calls that are dispatched using the new dispatch mechanism, and that's a bit all-or-nothing | 14:01
jnthnwrthngtn | In the best case it's "adapt spesh logging and candidate selection to the new calling conventions"
jnthnwrthngtn | I hoped that p6sink becoming a dispatcher would be sorta LHF, but I've got weird failures. | 14:02
jnthnwrthngtn | oh gosh, I think I see what I've done, and that's subtle AF | 14:04
jnthnwrthngtn | (For those curious: nqp::isconcrete will decont, but generating an isconcrete bytecode instruction directly will not, and this is relied upon in order to not sink things in containers.) | 14:07
jnthnwrthngtn | Yay, with that twiddled it works. No more MAST::Call usage in Rakudo | 14:17
MasterDuke | nice | 14:21
jnthnwrthngtn | Next up is changing over the last invoke instructions emitted anywhere by the QAST compiler | 14:22
jnthnwrthngtn | (those for invoking immediate blocks)
dogbert17 | MasterDuke: there are still GC bugs lurking, but perhaps Timo or Nine are fixing them as we speak :) | 14:23
timo | not me, nope | 14:29
nine | dogbert17: there are? How to reproduce? | 14:54
14:55 linkable6 left, evalable6 left
14:57 linkable6 joined
14:58 evalable6 joined
15:00 discord-raku-bot left
15:01 discord-raku-bot joined
dogbert17 | nine: give me a sec ... | 15:11
Geth | MoarVM/new-disp: 2bd41559a7 | (Jonathan Worthington)++ | lib/MAST/Nodes.nqp | 15:21
Geth |   Remove MAST::Call and other invoke emitting bits
Geth |   These are no longer used by NQP nor Rakudo running on new-disp.
jnthnwrthngtn | Finally. That means almost everything is now running via the new dispatcher | 15:22
jnthnwrthngtn | The remaining exception being NQP's multiple dispatch | 15:23
nine | Niiiice :)
dogbert17 | jnthnwrthngtn+++
jnthnwrthngtn | With no loss of passing tests/spectests, also :) | 15:27
jnthnwrthngtn | I think I might do the NQP multiple dispatch next, to get it out of the way. | 15:32
jnthnwrthngtn | There's not that much after that beyond being able to do the rebootstrap and start clearing some stuff up in MoarVM | 15:34
sena_kun recalls a gist with a TODO list | 15:35
lizmat | and yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2021/07/19/2021-...uled-to-3/ | 15:39
jnthnwrthngtn | lizmat++ | 16:09
dogbert17 | nine: one GC bug seems to be hiding in t/spec/S05-substitution/subst.t | 16:34
dogbert17 | I'm running with MVM_GC_DEBUG=1 and a nursery of 24000 bytes | 16:36
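For anyone trying to reproduce this: MVM_GC_DEBUG and the nursery size are compile-time settings in the MoarVM source tree, not environment variables. A rough sketch of the tweak dogbert17 is describing; the header locations and the default value are recalled from the source and should be treated as assumptions, not a quoted diff:

    /* src/gc/debug.h -- sketch only */
    #define MVM_GC_DEBUG 1          /* enable extra GC sanity checks, e.g.
                                       panics on fromspace accesses */

    /* src/gc/collect.h -- sketch only */
    #define MVM_NURSERY_SIZE 24000  /* normally a few MB; 24000 bytes forces a
                                       collection every handful of allocations */

With a nursery this small, an object that was not properly rooted gets moved almost immediately, so its stale pointer is caught close to the code that forgot to root it.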
16:51 evalable6 left, linkable6 left
16:52 evalable6 joined
16:53 linkable6 joined
18:02 reportable6 left
18:05 reportable6 joined
18:21 sena_kun left
nine | dogbert17: that could be a hard one | 18:30
nine | jnthnwrthngtn: I see an arg_info.arg.o pointing at fromspace. Same pointer is in the original ctx->arg_info.source[ctx->arg_info.map[arg_idx]]. But isn't source just the caller's work register list now? | 18:39
18:45 linkable6 left
18:47 linkable6 joined
dogbert17 | nine: so you managed to repro it then :) | 19:11
nine | Well... "it". That file yields several different failure modes. So far I have tracked down one definite bug. | 19:16
Geth | MoarVM/new-disp: 1db0f3bb9c | (Stefan Seifert)++ | src/disp/boot.c | 19:19
Geth |   Fix another possible access to fromspace in boot_code
Geth |   We also need to protect code from getting moved by the GC during the allocation in MVM_disp_program_record_track_arg.
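The class of bug this commit fixes recurs throughout the discussion, so here is a minimal sketch of the general MVMROOT pattern it relies on. This is illustrative only, not the actual diff from 1db0f3bb9c; the function and the allocation inside it are invented for the example:

    /* Any MVMObject* held in a C local across a call that can allocate must
     * be registered as a temporary GC root; otherwise a nursery collection
     * may move the object and leave the local pointing into fromspace. */
    static void example(MVMThreadContext *tc, MVMObject *code) {
        MVMROOT(tc, code, {
            /* anything in this block may trigger GC; `code` is updated in
             * place if the collector moves it */
            MVMObject *scratch = MVM_repr_alloc_init(tc,
                tc->instance->boot_types.BOOTArray);
            (void)scratch;
        });
        /* `code` is again safe to use here */
    }

MVMROOT pushes the address of the local onto the temporary-roots stack for the duration of the block, so the pointer is fixed up in place when the nursery is evacuated.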
nine | Now back to finding outdated pointers in registers | 19:23
19:26 dogbert11 joined
19:27 dogbert17 left
dogbert11 | Here's one of the failures: | 19:33
dogbert11 | #0 MVM_panic (exitCode=0, messageFormat=0x0) at src/core/exceptions.c:854
dogbert11 | #1 0x00007ffff78acad6 in MVMHash_bind_key (tc=0x55555555aea0, st=0x5555555c8e48, root=0x5555556d3ca0, data=0x5555556d3cb8, key_obj=0x5555576b9450, value=..., kind=8) at src/6model/reprs/MVMHash.c:119
dogbert11 | #2 0x00007ffff77dfb39 in MVM_args_slurpy_named (tc=0x55555555aea0, ctx=0x7ffff723c7a0) at src/core/args.c:1254
jnthnwrthngtn | nine: In simple cases, yes, source is the register list; however, it may also be a chunk of memory in a flattening callstack record, or point into a capture if we are in the record phase, or, if we can't use the args tail in a call, it may also be a set of temporaries written by the dispatch program | 19:42
nine | That's quite a few places to check for proper GC marking | 19:44
jnthnwrthngtn | Yes, although one can look at what kinds of records are on the call stack to get a hint | 19:51
jnthnwrthngtn | Curious. I switched NQP's multiple dispatch to new-disp, which probably was the last thing that spesh could get its hands on to optimize. Despite that, the setting build got a bit faster. | 19:58
lizmat smiles | 19:59
MasterDuke | jnthnwrthngtn: btw, have you seen github.com/Raku/nqp/pull/732? | 20:04
jnthnwrthngtn | MasterDuke: Yes, but I forgot to review it. I just did so; it looks fine to me, so I stuck an approve on it. | 20:07
20:07 dogbert17 joined
MasterDuke | cool, thanks | 20:07
lizmat | any reason not to bump MoarVM at the moment? | 20:08
20:09 dogbert11 left
20:11 linkable6 left
MasterDuke | lizmat: don't think so | 20:11
lizmat | ok, then will do so in a few mins
20:11 linkable6 joined
nine | jnthnwrthngtn: in dispatch_polymorphic don't we need to MVMROOT the collectables needed by the dispatch_initial call? Presumably those MVM_disp_program_run calls could lead to allocation, couldn't they? | 20:16
jnthnwrthngtn | nine: In theory the dispatch programs should never allocate if they fail (and we try another one), although I've conservatively tweaked things elsewhere to root them anyway | 20:22
nine | Oh... we do already
nine | No idea how I missed that
nine | Maybe it's time to call it a day after all :)
jnthnwrthngtn | Sigh, this is silly, though probably also my fault. We have *two* different things called NQPRegex. | 20:23
jnthnwrthngtn | The NQP rebootstrap fails due to something in that area, but I think I should also call it a day | 20:25
MasterDuke | recovered from the 2nd shot? | 20:26
jnthnwrthngtn | Well, this time around my arm didn't hurt at all. I rested all of Saturday, and on Sunday I was like "ohh, I'm fine", walked ~6km, felt totally knackered, and then woke up today wiped out. | 20:27
jnthnwrthngtn | So... I guess not quite, but I might have been if I'd been more careful yesterday :) | 20:28
Nicholas | rakudo new-disp gives me:
Nicholas | +++ Compiling blib/Perl6/Optimizer.moarvm
Nicholas | getcodeobj needs a code ref at gen/moar/Ops.nqp:351 (blib/Perl6/Ops.moarvm:)
jnthnwrthngtn | Did you pull?
Nicholas | so something not quite right
Nicholas | I thought that I did
Nicholas | fail!
jnthnwrthngtn | Hm, it seems I pushed
Nicholas | I did pull
jnthnwrthngtn | ah, good | 20:29
Nicholas | my fail, that is
Nicholas | I did pull
Nicholas | I didn't check it out
jnthnwrthngtn | oh :)
Nicholas | ^d'
jnthnwrthngtn | afk for now, maybe for the evening
Nicholas | sleep well | 20:30
21:11 evalable6 left, linkable6 left
21:13 evalable6 joined, linkable6 joined
21:22 MasterDuke left
21:48 jgaz joined
22:01 jgaz left
22:29 dogbert17 left, dogbert11 joined