Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes. Set by lizmat on 24 May 2021.
00:03 reportable6 left
00:14 dogbert11 joined, dogbert17 left
01:05 reportable6 joined
01:42 evalable6 joined
02:15 frost joined
03:21 Voldenet_ joined
03:22 Voldenet left, Voldenet_ is now known as Voldenet
04:13 kjp left
04:15 kjp joined
04:41 linkable6 joined
05:41 evalable6 left, linkable6 left
05:44 evalable6 joined
06:02 reportable6 left
06:04 reportable6 joined
07:18 unicodable6 left, sourceable6 left, releasable6 left, bloatable6 left, benchable6 left, statisfiable6 left, bisectable6 left, notable6 left, evalable6 left, quotable6 left, reportable6 left, shareable6 left, squashable6 left, greppable6 left, committable6 left, nativecallable6 left, tellable6 left, coverable6 left, squashable6 joined, tellable6 joined
07:19 unicodable6 joined, quotable6 joined, sourceable6 joined, bloatable6 joined, greppable6 joined
07:21 shareable6 joined
07:42 linkable6 joined
MasterDuke | i'm not sure why the patch in here is causing the error during rakudo's install gist.github.com/MasterDuke17/82c5a...d4538f4af7 | 07:43
nine | MasterDuke: patch looks ok to me | 07:56
Maybe it's unblocking code that contains an op with a buggy JIT implementation? We've had that before | 07:57
MasterDuke | good. and interestingly, i just re-ran the rakudo build 30s ago (only tried it once last night) and it completed fine
oh wow! but half the test files in `make m-test` had fails | 07:58
though amusingly, i get an occasional passing TODO in t/01-sanity/99-test-basic.t | 07:59
(which made more sense when i looked and saw that that test file tests Test, so the passing todo isn't real) | 08:01
i am on this branch github.com/MoarVM/MoarVM/pull/1584 though everything was ok before this most recent change. but i guess i'll check that patch against master to see if anything changes | 08:04
nine | I'd test which part of the patch causes the issues. It does 5 things: move those sanity checks from interp to inside the function, change the one existing JIT to a function call and add 3 more JITed ops. 5 parts you can easily test individually. | 08:06
MasterDuke | that too | 08:07
08:19 benchable6 joined, releasable6 joined
08:20 committable6 joined, notable6 joined
08:21 statisfiable6 joined, reportable6 joined
MasterDuke | seems to be ctxouter. if i don't jit it, spectests are fine. put it back in, instafails | 08:22
(i.e., the other three ops are fine jitted) | 08:23
nine | I'd bet on ctxouter unblocking something broken | 08:33
MasterDuke | this is using the spesh frame walker, there were a bunch of changes around it recently, right? | 08:37
nine | yes | 08:38
MasterDuke | does this look ok? | 08:57
(rr) p *name
$3 = {common = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 1, flags1 = 0 '\000', flags2 = 2 '\002', size = 48}, st = 0x560ce45dbd50}, body = {storage = {blob_32 = 0x560ce4ce1cf0,
blob_ascii = 0x560ce4ce1cf0 "::?CLASS\220^\021\232\n\177", blob_8 = 0x560ce4ce1cf0 "::?CLASS\220^\021\232\n\177", strands = 0x560ce4ce1cf0, any = 0x560ce4ce1cf0}, storage_type = 2, num_strands = 0, num_graphs = 8,
cached_hash_code = 11759813466749862169}}
looks to me like there's some junk on the end of the string | 09:03
but it utf8 encodes seemingly without problem to "::?CLASS" | 09:04
nine | That's ok. Those blobs are just not 0 terminated. You only have to look at the first num_graphs characters | 09:06
09:20 evalable6 joined
jnthnwrthngtn | moarning o/ | 10:00
lizmat | jnthnwrthngtn o/ | 10:04
jnthnwrthngtn | Hope y'all have been having fun while I was away. At least, I see commits :) | 10:07
lizmat | yeah, some commits and some issues as well | 10:12
github.com/rakudo/rakudo/issues/4601 cost me some time to figure out | 10:13
:-(
nine | Well, Sunday was definitely fun :) niner.name/pictures/PANO_20211031_124518.jpg Yesterday was a bit unproductive as my body was rather unambiguously demanding rest | 10:18
lizmat | wow
10:19 coverable6 joined
lizmat | definitely above the tree line :-) | 10:19
10:19 bisectable6 joined
jnthnwrthngtn | That's pretty :) | 10:22
Just taken care of the last comment on github.com/rakudo/rakudo/pull/4591 | 10:25
nine | We first went up the Hagler at 1669m and then continued to the Hoher Nock at 1963m. Our route: bit.ly/2ZI8yUW
lizmat | jnthnwrthngtn: I guess this will need correction then: docs.raku.org/language/variables#P..._variables | 10:26
There are three special variables that are available in every block:
jnthnwrthngtn | Yes, that's a bit confusing. Only $_ is fresh per block (and then it's *bound* to what is in the outer scope, so you'd need to make a binding, not an assignment) | 10:29
lizmat | yeah, working on fixing the doc
jnthnwrthngtn | lizmat++
lizmat | m: sub a() { dd MY::.keys }; a # what about $¢ ? | 10:30
camelia | ("\$¢", "\$_", "\$/", "\$!").Seq
lizmat | implementation detail ? | 10:31
nine | It appears in spec tests | 10:33
lizmat | ah, interesting
but only twice | 10:34
jnthnwrthngtn | Only declared inside of rules/regexes | 10:35
MasterDuke | so the exception i'm getting is thrown by MVM_frame_lexical_primspec, because MVM_get_lexical_by_name returns MVM_INDEX_HASH_NOT_FOUND. which is correct, the name it's looking for isn't there. but i don't know why | 10:41
dogbert11 | there are two (?) dispatchy bugs lurking about as well. They only show up under the toughest conditions, i.e. MVM_SPESH_NODELAY=1, MVM_SPESH_BLOCKING=1 and a very small nursery. | 10:42
Geth | MoarVM/new-special-return: 17 commits pushed by (Jonathan Worthington)++ review: github.com/MoarVM/MoarVM/compare/b...760b58598e
lizmat | even though $¢ is tested for in roast, it feels like an implementation detail | 10:43
dogbert11 | Messages are 'Can only use manipulate a capture known in this dispatch' and 'Already set resume init args for this dispatcher' respectively
The second one has a large chance of being GC related, gist.github.com/dogbert17/9c648c7e...c6e046b573 | 10:45
jnthnwrthngtn | lizmat: I assure you if it was an implementation detail I woulda picked a different name ;-)
I'm quite sure it appears in S05
Yup, 12 mentions in raw.githubusercontent.com/perl6/sp...-regex.pod | 10:46
How useful it is, is another question :)
Especially after the Cursor/Match unification | 10:47
nine | dogbert11: FWIW I've yet again failed to reproduce. This time the range.t issue | 10:48
lizmat | jnthnwrthngtn: ok, I will *not* document it :-) | 10:49
dogbert11 | nine: how is that possible :) what nursery size are you using?
also, are you on master or your nativecall branch? | 10:51
Geth | MoarVM/faster-frame-alloc: fcddd1210d | (Jonathan Worthington)++ | src/core/frame.c
Split specialized and unspecialized frame alloc
This allows us to eliminate more checks than the compiler can, and also seems - at least on some compilers - to make it possible to inline into MVM_frame_dispatch. One further opportunity is that we can fold two calls to memset into one in the common on-callstack allocation case of specialized frames. | 10:58
MoarVM/faster-frame-alloc: 9903a543dc | (Jonathan Worthington)++ | src/core/frame.c
Elide frame initialization check with spesh cand
If we are at the point that we are making a call where we have already identified a specialization of a frame, then clearly that frame has been invoked before. Thus move the frame initialization check into the path of the non-specialized invocation.
nine | dogbert11: on master with 4K nursery: while MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 ./rakudo-m -Ilib t/spec/S02-types/range.t; do :; done | 11:01
dogbert11: also tried MVM_GC_DEBUG=3
dogbert11 | I failed to mention that I actually run that test with a 20k nursery and MVM_GC_DEBUG=1 | 11:11
dogbert11 hides
11:19 evalable6 left, linkable6 left
11:20 nativecallable6 joined
Geth | MoarVM: jnthn++ created pull request #1589: Some small optimizations for frame allocation | 11:20
11:22 evalable6 joined
lizmat | jnthnwrthngtn : github.com/Raku/doc/commit/c867f164ba | 11:22
jnthnwrthngtn | grmbl, seems I'll have to go searching for why github.com/MoarVM/MoarVM/pull/1581 causes `make test` failures on CI
(Which I've yet to see locally)
MasterDuke | two of them are using clang, do you ever try that locally? | 11:23
jnthnwrthngtn | I thought this machine was building with clang but now I see it's GCC :) | 11:25
lizmat: I wonder whether "in each routine" would be better than "in each sub / method" (in that it's more complete; for example, it's true of submethod but also regex/token/rule routines too) | 11:26
lizmat | ok... I was wondering about that, but yeah
jnthnwrthngtn | lizmat: An alternative solution to "declare your own $/" would also be "declare a sub instead" | 11:27
lizmat | map: sub () {
you mean?
jnthnwrthngtn | Yes
Well, it'd have to be map: sub ($_) {
lizmat | ah, yes | 11:28
jnthnwrthngtn | Dunno to what degree this is better, but it's maybe a nice alternative
Otherwise it looks good to me | 11:29
lizmat | I feel the "my $/;" is more self-documenting
it's easy to lose track of why a `sub` was used there, and then a subsequent refactorer might think:
why a sub? that's not needed!
and bang, your program suddenly becomes unreliable | 11:30
jnthnwrthngtn | True
lizmat | so I feel like not documenting that option :-)
jnthnwrthngtn | MasterDuke: Sadly, building with clang did not magically surface the error | 11:34
Hmm | 11:35
Type check failed for return value; expected CompUnit:D but got Method (method sink (Mu: *%_...) | 11:36
at /home/vsts/work/1/rakudo/t/02-rakudo/04-diag.t:3
The test varies, but the error seems to reliably be that one | 11:37
oh, interesting...actually one of the 3 is a SEGV in t/04-nativecall/08-callbacks.t | 11:43
Which I can't reproduce either :( | 11:46
nine: Did you ever manage to reproduce the issues seen in CI locally for github.com/MoarVM/MoarVM/pull/1581 ? | 11:47
The PR actually does two things: replace the special return mechanism, and then do some follow-up optimizations. I guess I'll try a version of the PR that only has the first half of it. | 11:48
Geth | MoarVM/new-special-return-only: 07be913d17 | (Jonathan Worthington)++ | src/core/callstack.h
Add temporary define for migrating Rakudo extops | 11:52
MoarVM/new-special-return-only: 03ff8e7a28 | (Jonathan Worthington)++ | src/gc/finalize.c
Avoid a NULL dereference when running finalizers
MoarVM: jnthn++ created pull request #1590: Migrate special return to callstack | 11:53
lizmat | jnthnwrthngtn: fwiw, your latest rakudo merge seems to negatively affect test-t (1 or 2% or so) | 12:00
12:02 reportable6 left
jnthnwrthngtn | lizmat: Curious, I'd not really have expected much of an effect there. | 12:10
For the things it helps, the win is a lot greater than 1 or 2%.
lizmat | there's that :-) | 12:12
just giving data points, not arguing :-)
jnthnwrthngtn | For example, with Agrammon we saw a 9% slowdown after new-disp. With this, it became a few percent improvement over pre-new-disp | 12:13
lizmat | works for me :-)
jnthnwrthngtn | Still, I wonder what the mechanism is that causes a change in test-t. | 12:14
Does the effect appear in test-t-20?
(That is, has this change made warm-up a bit worse?) | 12:15
dogbert11 | nine: before you give up, can you try this golf on normal settings, i.e. no flags set and no changes to the GC flags? gist.github.com/dogbert17/af79a02e...cb4d9c6711 | 12:16
jnthnwrthngtn | Sigh, turns out that github.com/MoarVM/MoarVM/pull/1590 is even more faily
lunch & | 12:17
12:22 linkable6 joined
lizmat | hmmm... maybe it's just a matter of slightly increased startup time ? | 12:30
nine | jnthnwrthngtn: no, my reproduction success rate has been abysmal in the past week | 13:02
13:03 reportable6 joined
jnthnwrthngtn | lizmat: Maybe, that's why I asked about test-t-20 (which would be less sensitive to that) | 13:21
timo | jnthnwrthngtn: i think the last faster-frame-alloc commit prevents frames that have already run from hitting the instrumentation barrier after profiling has been turned on, like when you profile a program that evals, for example, a bunch of frames in the compiler would already be specialized | 13:28
lizmat | I see an improvement there for sure: 14.3 seconds -> 13.8 seconds
for test-t-20
jnthnwrthngtn | timo: It's not that the frames have been specialized, but rather that they are called via specialization linking | 13:30
timo | ah, so really only from another specialized frame already | 13:35
so there'd always be an outer that's invoked normally and can hit the barrier
well, not "outer"
nine | caller? | 13:36
lizmat | jnthnwrthngtn: fwiw, I don't see much of a difference with test-t --race | 13:37
perhaps that just doesn't run long enough
jnthnwrthngtn | lizmat: Perhaps so
13:42 MasterDuke left
lizmat | afk for a bit& | 13:47
14:09 MasterDuke joined
14:40 vrurg left
14:42 vrurg joined
15:00 frost left
dogbert11 | nine: are you on a train getting home from $work? | 16:06
16:12 evalable6 left, linkable6 left, evalable6 joined
timo | he took a midnight train --> Any $ where ... | 16:12
lizmat | jnthnwrthngtn: comparing logs.liz.nl/raku-dev/2021-10-29.html#17:35 with logs.liz.nl/raku-dev/2021-11-02.html#14:43 shows test-ts are slower :-( | 16:13
dogbert11 | timo: can I trick you into running gist.github.com/dogbert17/af79a02e...cb4d9c6711 in order to see if you can repro the error we discussed yesterday?
or anyone else for that matter :) | 16:17
MasterDuke | `Already set resume init args for this dispatcher` | 16:18
dogbert11 | MasterDuke++, this means I'm not insane and the problem is real | 16:19
perhaps jnthnwrthngtn can figure out what's wrong just by looking at the gist | 16:22
timo | i also get the error message locally | 16:39
jnthnwrthngtn | lizmat: Hm, didn't you post a number earlier showing a faster test-t-20? | 16:51
lizmat | that was on my local machine
nine | dogbert11: I've just come home from work | 16:53
jnthnwrthngtn | lizmat: OK, let's see tomorrow's numbers to make sure it's not just that the machine they were posted from was otherwise busy | 16:54
lizmat | yup | 16:55
jnthnwrthngtn | dogbert11: I can reproduce it. It goes away with MVM_SPESH_DISABLE=1.
dogbert11: But it doesn't crash; instead it changes behavior (loads of warnings) with MVM_SPESH_BLOCKING=1
nine | dogbert11: FWIW I can finally reproduce it as well with your golf | 16:57
jnthnwrthngtn: could it be because finalizers are now run at an inopportune time? | 16:58
jnthnwrthngtn | nine: That ain't merged yet :) | 17:00
Actually I was on the branch that changed that and went back to MoarVM master to check it
nine | It also occurs with finalizers disabled
jnthnwrthngtn | Turns out the warnings are there too
It's just that when it crashes it does so before it has time to spit out a load of them, it seems
oh, it's not stable; it behaves differently run to run | 17:02
dogbert11 | suddenly everyone can repro, cool :) | 17:04
jnthnwrthngtn | Gah, disabling hash randomization doesn't get it completely stable
It spesh bisects to Spesh of '' (cuid: 3, file: x.raku:14) | 17:08
timo | any deopts happen right before explosion? | 17:10
jnthnwrthngtn | uh, this makes no sense, how on earth do we end up in nextwith?
at gen/moar/BOOTSTRAP/v6c.nqp:6545 (/home/jnthn/dev/rakudo/blib/Perl6/BOOTSTRAP/v6c.moarvm:)
from SETTING::src/core.c/control.pm6:142 (/home/jnthn/dev/rakudo/blib/CORE.c.setting.moarvm:nextwith)
from x.raku:14 (<ephemeral file>:)
There is no nextwith in this whole file
nine | neither in Test.pm6 | 17:11
jnthnwrthngtn | Nor...right
nine | err... .rakumod
lizmat | .oO( if it walks like a duck, and quacks like a duck, and it still isn't a duck, the name is wrong )
17:11 nine left
17:13 linkable6 joined
17:14 nine joined
17:15 nine left, nine joined
jnthnwrthngtn | Anybody repro this? | 17:16
$ ./rakudo-m -e 'for ^10000 { <0+0i> ~~ 1..10 }'
Segmentation fault (core dumped)
lizmat | yup | 17:17
segfaults for me
MasterDuke | yep, in setup_translated_resumption
jnthnwrthngtn | OK, that's a better golf them
*then
lizmat | need about 2200 iterations to get a reliable segfault | 17:18
MasterDuke | (gdb) p *ri
$2 = {dp = 0x0, deopt_idx = 0, res_idx = 0, state_register = 0, init_registers = 0xa5}
jnthnwrthngtn | Still there with JIT disabled, which makes the gdb stacktrace a little easier | 17:19
dogbert11 | jnthnwrthngtn: I get a SEGV
disappears with MVM_SPESH_INLINE_DISABLE=1 | 17:20
lizmat | confirmed | 17:22
jnthnwrthngtn | Yup, that also correlates with where we are in find_internal
dogbert11 is starting to believe that this bug is in BIG trouble :) | 17:23
jnthnwrthngtn | oh duh | 17:24
This looks silly
Have a probable fix, testing | 17:27
Geth | MoarVM: 9380981185 | (Jonathan Worthington)++ | src/disp/resume.c
Fix thinko in inline dispatch resumption search
We are doing a forward search through the resumptions (because they were emitted in reverse - most nested dispatcher first - already at dispatch program compilation time). Thus the index should be incremented to reach the end, not decremented. Could lead to out-of-bounds reads or just an attempt to resume the wrong dispatcher depending on how it came up. | 17:33
jnthnwrthngtn | I really don't understand why CI is such a mess on github.com/MoarVM/MoarVM/pull/1590 but I can't reproduce any of these issues locally | 17:38
lizmat | jnthnwrthngtn: feels like it's time for a bump | 17:40
MasterDuke | aw. ^^^ fix doesn't also fix my lexical not found on my jit branch | 17:42
jnthnwrthngtn | Does anybody confirm it fixes the problem on their machine?
MasterDuke | does for me (the complex example)
jnthnwrthngtn | OK, good
dogbert11 | jnthnwrthngtn++ | 17:45
jnthnwrthngtn | lizmat: Given it seems the fix is confirmed to help, can do | 17:46
lizmat | ok, will do
jnthnwrthngtn | If anybody would be able to try and reproduce the Rakudo make test failures with 1590 it'd be appreciated. I can't at all.
dogbert11 | fix works splendidly | 17:47
jnthnwrthngtn | (Build the MoarVM branch new-special-return-only; you'll need to rebuild Rakudo also due to extops changes)
dogbert11 | will do | 17:48
MasterDuke | t/02-rakudo/08-slangs.t ......................................... Dubious, test returned 255 (wstat 65280, 0xff00) | 17:56
but i had to have a spectest running at the same time | 17:57
jnthnwrthngtn | MasterDuke: OK, at least somebody can repro something
With spesh blocking + a small nursery I managed to get one spectest to fail
aha, and the small nursery is significant | 17:58
18:02 reportable6 left
jnthnwrthngtn | OK, this failure is seemingly to do with finalization | 18:31
I'm not really sure what makes the timing of it unfortunate enough to cause bother
dogbert11 | perhaps a good dinner might clarify things? | 18:39
jnthnwrthngtn | Yeah, cooking a dal at the moment | 19:11
MasterDuke | now this is interesting. on my branch where i'm jitting ctx(caller|outer)(skipthunks) and i get tons of those `Frame has no lexical with name ...` because of ctxcallerskipthunks, a test and spectest both pass if i disable the expr jit. even though none of those four ops have templates! | 19:18
so yeah, it seems like the jitting of ctxcallerskipthunks is uncovering some other problem | 19:19
dogbert11 | interesting, the new-special-return-only branch has tests failing with the other dispatchy error, the one nine unfortunately didn't manage to repro | 19:28
using an 8k nursery, t/spec/S06-advanced/wrap.rakudo.moar and t/spec/S12-methods/defer-next.t fail with 'Can only use manipulate a capture known in this dispatch' | 19:29
MasterDuke | ah, the jitting of ctx(caller|outer)(skipthunks) causes problems even if i just cherry-pick that commit onto master. new error now though, `No such method 'exception' for invocant of type 'X::AdHoc'` | 19:32
lizmat | hmmmm... for the first time in a long time, I got a hang in t/spec/S17-promise/nonblocking-await.t | 19:40
(this is on master everything)
20:03 reportable6 joined
20:12 sena_kun left
20:16 sena_kun joined
Geth | MoarVM/new-special-return: 5dd58a5db3 | (Jonathan Worthington)++ | src/gc/finalize.c
Don't lose exception handler results
Invoking finalizers could end up replacing tc->last_handler_result, leading to missing or incorrect values being observed for that after finalizers had run. Don't run finalizers when there is such a risk. | 21:06
jnthnwrthngtn | Here's hoping that might fix the CI failures I couldn't reproduce. | 21:10
dogbert11 | how does that branch differ from new-special-return-only? | 21:16
nine | new-special-return-only is only a part of the branch, with the purpose of narrowing down the changes that cause the failure | 21:23
jnthnwrthngtn: looking at that commit I wonder if that problem could have appeared before as well? Or manifested in a different way (like not calling the finalizer when there's a last_handler_result) | 21:24
dogbert11 | nine: new-special-return-only contains errors but perhaps jnthnwrthngtn fixed those in the other branch | 21:26
jnthnwrthngtn | nine: I did wonder the same; I don't know why the previous approach wouldn't have been vulnerable to the same issue | 21:27
Sigh. The PR still fails. With things I can't reproduce. | 21:28
Not sure what to do about that.
Geth | MoarVM/new-special-return-only: 733dbd3293 | (Jonathan Worthington)++ | src/gc/finalize.c
Don't lose exception handler results
Invoking finalizers could end up replacing tc->last_handler_result, leading to missing or incorrect values being observed for that after finalizers had run. Don't run finalizers when there is such a risk. | 21:30
jnthnwrthngtn | Let's see how things look in this one also
Geth | MoarVM/new-special-return: c77a89fae6 | (Jonathan Worthington)++ | src/core/callstack.c
Don't run finalizers at all
To see if that is to blame for the `make test` failures | 21:32
jnthnwrthngtn | On the upside, github.com/MoarVM/MoarVM/pull/1589 has passed CI | 21:33
dogbert11 | the 'Can only use manipulate a capture known in this dispatch' problem is still present after applying your latest commit | 21:39
jnthnwrthngtn | dogbert11: That exists on master too, yes? | 21:40
dogbert11 | yes, but it has been very difficult to repro there the last few days | 21:41
running './rakudo-gdb-m -Ilib t/spec/S06-advanced/wrap.rakudo.moar' with MVM_GC_DEBUG=3 leads to a SEGV | 21:42
jnthnwrthngtn | Which branch?
dogbert11 | new-special-return-only | 21:43
jnthnwrthngtn | And with default settings, or a small nursery?
Well, other than the one you mentioned
dogbert11 | the only changes are MVM_GC_DEBUG=3 and building with --no-optimize
#0 log_parameter (tc=0x55555555ae30, cid=192924, arg_idx=0, param=0x55555563f5e8) at src/spesh/log.c:95 | 21:44
#1 0x00007ffff79449cd in MVM_spesh_log_entry (tc=0x55555555ae30, cid=192924, sf=0x5555579948b0, args=...) at src/spesh/log.c:130
#2 0x00007ffff7829b5a in MVM_frame_dispatch (tc=0x55555555ae30, code=0x5555579edfb8, args=..., spesh_cand=-1) at src/core/frame.c:552
#3 0x00007ffff790df5f in run_resume (tc=0x55555555ae30, record=0x7ffff722da38, disp=0x555557a89890, capture=0x55555563f5e8, thunked=0x7fffffffc048) at src/disp/program.c:618
#4 0x00007ffff7917158 in MVM_disp_program_record_end (tc=0x55555555ae30, record=0x7ffff722da38, thunked=0x7fffffffc048) at src/disp/program.c:2793
running with that flag set to 3 is slooow | 21:46
jnthnwrthngtn | uh, yes, I'm still waiting to see if I can repro it | 21:47
21:47 linkable6 left, evalable6 left
dogbert11 | would be nice, it's an elusive beast | 21:47
to be fair, I had the nursery set to 20k but I'm under the impression that this value is ignored when MVM_GC_DEBUG=3 | 21:49
could I be wrong about that perhaps?
jnthnwrthngtn | No, I think it collects every allocation | 21:51
dogbert11 | yeah, but perhaps it tries to move nurseries around every allocation ? | 21:52
four megs is more than 20k but perhaps I'm spouting nonsense here | 21:53
jnthnwrthngtn | OK, github.com/MoarVM/MoarVM/pull/1581 fails even without running any finalizers
dogbert11 | and using a 20k nursery makes the program run faster than a four meg nursery when MVM_GC_DEBUG=3, at least on my system | 21:55
jnthnwrthngtn | # expected a match with: /'(1 2 3 4 5 6 7 8 9 10)' .* 'test is good'/
# got: "(1 2 3 4 5 6 7 8 9 10)\nUnhandled lexical type 'num32' in lexprimspec\n\n"
dogbert11 | oh, you got something
jnthnwrthngtn | On CI
dogbert11 | ah
jnthnwrthngtn | OK, here goes with adding one commit at a time to the PR to see where it breaks... | 21:57
dogbert11 | well the SEGV shows up consistently on my system | 21:58
jnthnwrthngtn | That one is still running yet
ah
yes, just got the segv | 21:59
dogbert11 | yessssssss :)
jnthnwrthngtn | (gdb) p *param
$2 = {header = {sc_forward_u = {forwarder = 0xefefefefefefefef, sc = {sc_idx = 4025479151, idx = 4025479151}, st = 0xefefefefefefefef}, owner = 4025479151, flags1 = 239 '\357', flags2 = 239 '\357', size = 61423}, st = 0xefefefefefefefef}
Well, that's not looking good. | 22:00
dogbert11 | 0xefefefefefefefef looks suspicious
src/gc/orchestrate.c-#if MVM_GC_DEBUG >= 3 | 22:01
src/gc/orchestrate.c: memset(other->nursery_fromspace, 0xef, other->nursery_fromspace_size);
src/gc/orchestrate.c-#endif
MasterDuke | also in src/core/threadcontext.c | 22:02
dogbert11 | but this probably explains why running with a 20k nursery is faster
jnthnwrthngtn | Indeed, it is rather faster that way | 22:21
dogbert11: You could reproduce this on master also, yes?
Uhh. github.com/MoarVM/MoarVM/pull/1590 has presently only one commit that adds what should be only dead code. And...fails CI too | 22:25
oh, it's a git checkout failure...that ain't my code :) | 22:26
Let's re-run that one.
lizmat | .oO( it's alive, but not as we know it! )
MasterDuke | just a git error
Geth | MoarVM: 4d145b4651 | (Jonathan Worthington)++ | src/disp/program.c
Add missing GC mark of resume capture | 22:27
jnthnwrthngtn | dogbert11: ^^ fixes it
(Committed to master 'cus it's wrong there too...) | 22:28
dogbert11 | yay | 22:30
jnthnwrthngtn | No matter how many times I click re-run on that failed git test, it doesn't re-run... | 22:34
MasterDuke | i think it doesn't actually restart it until the others are finished | 22:35
22:35 sena_kun left
jnthnwrthngtn | ah, ok | 22:36
22:38 sena_kun joined
dogbert11 | jnthnwrthngtn: the problem on master was reported as github.com/MoarVM/MoarVM/issues/1580 | 22:39
perhaps it can be closed
jnthnwrthngtn | Hm, that's a different (although very possibly related) failure mode to the one I just fixed | 22:40
dogbert11 | yeah, the stacktrace doesn't look quite the same but it did look the same if I ran the SEGV example with an 8k nursery and MVM_GC_DEBUG=1 | 22:42
I guess it could be two different bugs, let me investigate, I'll apply your latest master commit to new-special-return-only and give it a go | 22:43
jnthnwrthngtn | MasterDuke: Yes, seems I can queue a re-run for the git failure now it did the others; thanks for the tip
MasterDuke | np. took me quite a while to realize that so figured i'd share the wealth | 22:44
22:50 linkable6 joined, evalable6 joined
22:54 sena_kun left
dogbert11 | the 'Can only use manipulate a capture known in this dispatch' was either fixed by the latest commit or it has gone into hiding, can't repro any longer | 22:56
Geth | MoarVM/new-special-return-only: 5dfbc68774 | (Jonathan Worthington)++ | src/core/loadbytecode.c
Migrate loadbytecode usage of special return | 22:57
jnthnwrthngtn | hmm...pushing that doesn't seem to have triggered the azure checks... | 22:59
Geth | MoarVM/new-special-return-only: 9b499f94fe | (Jonathan Worthington)++ | src/core/loadbytecode.c
Migrate loadbytecode usage of special return | 23:00
MasterDuke | it looks like github might be having problems
jnthnwrthngtn | Oh
I wondered if it was somehow trying to optimize it out because it saw that hash sometime before now... | 23:01
(What I just pushed is just amending it with a new date so it's another sha-1)
Which has immediately triggered the CI
I'm tired, will continue trying to CI-bisect this tomorrow, or maybe see if my office machine will repro it. | 23:06
23:36 sena_kun joined
23:53 sena_kun left
23:57 squashable6 left
23:58 squashable6 joined