github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm Set by AlexDaniel on 12 June 2018. |
Geth | MoarVM: MasterDuke17++ created pull request #1264: Free malloced data before leaving the function | 00:06
MoarVM: patrickbkr++ created pull request #1265: Always handle proc exec arguments verbatim on Windows | 00:47
00:53 patrickb61 joined
00:55 patrickb left
01:05 patrickb61 left
06:04 reportable6 joined
08:28 AlexDaniel left
Geth | MoarVM: 24f663cf51 | (Daniel Green)++ | src/6model/serialization.c: Free malloced data before leaving the function | 08:55
MoarVM: 4f08d803f4 | niner++ (committed using GitHub Web editor) | src/6model/serialization.c: Merge pull request #1264 from MasterDuke17/cleanup_serialization_output: Free malloced data before leaving the function
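The merged PR's title names a standard C hygiene pattern: releasing temporary heap allocations on every exit path from a function, usually by funneling all exits through one cleanup label. A minimal sketch of the pattern (the `process` function and its names are hypothetical, not MoarVM's actual serialization code):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical example: build an output string via a temporary buffer,
 * making sure the temporary is freed on every exit path, including
 * early errors, via a single cleanup label. */
static int process(const char *input, char **out) {
    int rc = -1;
    char *tmp = malloc(strlen(input) + 1);
    if (!tmp)
        return -1;
    strcpy(tmp, input);
    if (tmp[0] == '\0')
        goto cleanup;          /* early exit: tmp is still freed below */
    *out = malloc(strlen(tmp) + 1);
    if (!*out)
        goto cleanup;
    strcpy(*out, tmp);
    rc = 0;
cleanup:
    free(tmp);                 /* the one place the temporary is released */
    return rc;
}
```

Each allocation is paired with exactly one `free`, so adding a new early `return` cannot silently leak the temporary.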
10:04 sena_kun joined
10:51 Altai-man_ joined
10:53 sena_kun left
11:42 lizmat joined
11:58 lizmat left
12:07 lizmat joined
12:18 zakharyas joined
lizmat | would now be a good time to do a bump ? | 12:29
Altai-man_ | lizmat, good enough | 12:36
some nice fixes are there, at the very least
lizmat | ok, will do | 12:42
12:52 sena_kun joined
12:53 Altai-man_ left
12:57 zakharyas left
MasterDuke | jnthn, nine, timotimo, brrt: sena_kun thinks github.com/MoarVM/MoarVM/commit/16...55710b6f7d is responsible for the appveyor failures. i don't have a windows machine to test on (or the confidence i could diagnose even if so). anybody mind giving that commit a second look? | 13:19
sena_kun | I'll now explain my reasoning for this suspect... | 13:20
nine | MasterDuke: I've given it a close look, and the expr JITed versions too, and couldn't find anything wrong
MasterDuke | hm
and thanks | 13:21
sena_kun | the bump which brought in the issue was from 44-g04005cf43 to 47-g3c3ad0678, so on 04005cf43 it worked, but on 3c3ad0678 it already did not. In between there are three commits: 2252a95df953d7182e2a380dffdfd50e55380bab, which is a revert of some fix, 162b68b6b676318901c7baf6fd4c4955710b6f7d, which does something with asm, and the third is its merge commit. | 13:22
linkable6 | (2020-03-16) github.com/MoarVM/MoarVM/commit/3c3ad0678a Merge pull request #1259 from MasterDuke17/jit_nextdispatcherfor
(2020-03-15) github.com/MoarVM/MoarVM/commit/162b68b6b6 JIT nextdispatcherfor
jnthn | MasterDuke: Probably I can try a Windows build and see if I can reproduce it a bit later today | 13:23
sena_kun | what I also found suspicious is that this commit added not only a case for MVM_OP_nextdispatcherfor but also one for MVM_OP_takenextdispatcher, so maybe it's the earlier-introduced MVM_OP_takenextdispatcher that's guilty.
jnthn | That's also possible
If it only breaks on Windows I kinda suspect a calling conventions violation (they're different) | 13:24
sena_kun | the "easy" solution is to just revert them and see if that helps, but I am reluctant to do this because we'd need not just a revert but a revert plus two bumps, and if that turns out to be pointless then it's all for nothing.
And then we'd have to revert the revert and bump twice again, so maybe just looking for the real reason is cheaper than a hurried workaround (as we are not in a real critical hurry). | 13:25
jnthn | Well. | 13:26
Updating submodules .................................... List form of pipe open not implemented
MoarVM won't even configure on my machine.
(On Windows, that is.) | 13:27
lizmat | yikes | 13:28
jnthn | So much for "let's leave a build to happen while I lunch"... | 13:29
Well, bbl
nine | BUT
There is an error in the lego JIT implementation for MVM_OP_takenextdispatcher which gets unlocked by github.com/MoarVM/MoarVM/commit/16...c6299R1816 | 13:30
dogbert17 | FWIW, the DU error uncovered by nwc10++ yesterday is caused by github.com/MoarVM/MoarVM/commit/6b...870f5c3ae2
perhaps that is what nine is saying above :) | 13:31
Geth | MoarVM: 3438ad2a40 | (Stefan Seifert)++ | src/jit/x64/emit.dasc: Fix copy pasta in lego JIT implementation of takenextdispatcher | 14:43
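The commit message names a copy-paste ("copy pasta") bug: a handler duplicated from a sibling op that kept something belonging to the sibling. A toy illustration of that bug class (the ops and handlers below are invented for this sketch, not MoarVM's actual lego JIT code):

```c
/* Hypothetical two-op dispatcher. OP_B's case was pasted from OP_A
 * and kept reading OP_A's operand -- the shape of bug the commit fixes. */
enum { OP_A, OP_B };

/* Buggy version: OP_B still uses src1, the operand OP_A reads. */
static int eval_buggy(int op, int src1, int src2) {
    switch (op) {
    case OP_A: return src1 * 2;
    case OP_B: return src1 * 3;   /* BUG: should read src2 */
    }
    return 0;
}

/* Fixed version: each case reads its own operand. */
static int eval_fixed(int op, int src1, int src2) {
    switch (op) {
    case OP_A: return src1 * 2;
    case OP_B: return src2 * 3;   /* fixed: correct operand */
    }
    return 0;
}
```

The buggy variant still compiles and runs, which is why such pastes survive until a differently-shaped input exposes them.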
lizmat | nine++ | 14:47
sena_kun | lizmat, bump time?
lizmat | fine by me :-)
nine?
nine | sure
Lets hope this fixes the Windows issue | 14:48
lizmat | ok, coming up
nine: feels like the Windows issue is a problem in the build process
sena_kun | lizmat, no
lizmat | no? ok, in that case... whee!
sena_kun | lizmat, jnthn has said it's likely a segfault, looking at different logs of failures
14:51 Altai-man_ joined
14:53 sena_kun left
Altai-man_ | lizmat, do you plan to bump nqp after travis goes green, or? | 15:03
lizmat | sorry, got distracted while spectesting :-)
bumped now | 15:04
Altai-man_ | lizmat++
when probably we'll be able to have a release tonight, or maybe closer to tomorrow as blin is required
lizmat | looking forward to it! | 15:05
Altai-man_ | s/when/then/
dogbert17 | unsurprisingly the DU error is still present | 16:00
16:52 sena_kun joined
16:53 Altai-man_ left
18:02 zakharyas joined
18:51 Altai-man_ joined
18:54 sena_kun left
20:28 lucasb joined
20:52 sena_kun joined
20:54 Altai-man_ left
22:24 patrickb joined
22:51 Altai-man_ joined
22:54 sena_kun left
23:17 AlexDaniel joined, AlexDaniel left, AlexDaniel joined
23:21 zakharyas left
23:22 Kaiepi left
23:23 Kaiepi joined
MasterDuke | i'm curious, what's the optimization possible here? github.com/MoarVM/MoarVM/blob/mast...#L922-L923 | 23:27
23:28 Altai-man_ left
timotimo | can use memcpy if the sizes and signednesses match, can use a tight loop that likely gets compiled to vectorized instructions if it's two MVMArray objects | 23:29
since if the other object is an MVMArray too, you can access the other object's body->elems | 23:30
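timotimo's suggestion above, a memcpy fast path when the element representations match exactly and a per-element loop otherwise, can be sketched like this (the helpers are hypothetical, not MoarVM's actual VMArray code):

```c
#include <stdint.h>
#include <string.h>

/* Fast path: source and destination elements have identical size and
 * signedness, so a single memcpy replaces the per-element loop.
 * (memcpy requires non-overlapping buffers; use memmove otherwise.) */
static void copy_elems_i32(int32_t *dst, const int32_t *src, size_t n) {
    memcpy(dst, src, n * sizeof(int32_t));
}

/* Fallback for mismatched representations: per-element conversion,
 * here widening 16-bit source elements into a 32-bit destination.
 * A compiler will often vectorize this tight loop anyway. */
static void copy_elems_i16_to_i32(int32_t *dst, const int16_t *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}
```

The point of the type check is picking the first helper whenever it is legal; the second remains correct for every combination, just slower.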
MasterDuke | hm, still have to copy either way? | 23:33
timotimo | i don't think you can get around it, yeah | 23:34
MasterDuke | a large chunk of the allocations that happen during a rakudo compile are from asplice | 23:37
timotimo | do you have stack traces to go with it, or at least the line numbers / filenames or whatever to go with them? | 23:38
MasterDuke | 592750 at gen/moar/stage2/QAST.nqp:7015 (/home/dan/Source/perl6/install/share/nqp/lib/QAST.moarvm:add)
593411 at gen/moar/stage2/QAST.nqp:7271 (/home/dan/Source/perl6/install/share/nqp/lib/QAST.moarvm:write_string_heap)
github.com/Raku/nqp/blob/master/sr...2294-L2325 | 23:39
github.com/Raku/nqp/blob/master/sr...2550-L2553 | 23:40
i can send you the heaptrack file if you'd like | 23:44
that has the moarvm stack traces | 23:45
timotimo | moarvm stacktraces are not quite as interesting i think? | 23:46
if you're only recording asplice calls | 23:47
huh, that add function is interesting; checking whether a string is utf8 by going char by char
MasterDuke | i only recorded asplice calls after seeing it at the top of the heaptrack data | 23:48
timotimo | what are those numbers for then? for the add and write_string_heap functions?
MasterDuke | oh, number of times called
i added `fprintf(stderr, "%s\n", MVM_exception_backtrace_line(tc, tc->cur_frame, 0, *(tc->interp_cur_op)));` right before the set_size_internal() in asplice() | 23:49
timotimo | i assume this comes from write_* method calls? | 23:50
i mean, those already pre-size the buffer, so the set_size_internal ought to not do anything in many of these cases? | 
MasterDuke | well, heaptrack says 4.3gb allocated by set_size_internal in asplice when building CORE.c | 23:51
timotimo | but we're not outputting only the calls that actually do an allocation in set_size_internal | 23:52
oh, or is that inside a conditional?
that is line 990 of VMArray.c? | 23:53
MasterDuke | yep
timotimo | hm actually
we can totally have a variant of set_size_internal that doesn't have to worry about nulling out slots | 23:54
that we can use when we know we're going to write over them anyway
the initial setelems for $foo and then 0 will have nulled everything out already
then we're just pasting data in there from start to end
write_buf at least could always use that, asplice perhaps as well | 23:55
it'll not be terribly much efficiency we'd win, but *shrugs* every little bit helps right? | 23:57
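The variant timotimo describes, a resize that skips zeroing the new slots when the caller will immediately overwrite them, might look like this on a toy growable buffer (hypothetical, not MoarVM's actual set_size_internal):

```c
#include <stdlib.h>
#include <string.h>

/* Minimal growable byte buffer for illustration. */
typedef struct { char *data; size_t size; } buf_t;

/* Safe default: grow and zero the newly exposed tail, so readers of
 * unwritten slots see defined contents. Returns 0 on success. */
static int buf_grow_zeroed(buf_t *b, size_t new_size) {
    char *p = realloc(b->data, new_size);
    if (!p) return -1;
    memset(p + b->size, 0, new_size - b->size);
    b->data = p;
    b->size = new_size;
    return 0;
}

/* The discussed variant: grow without zeroing. Only valid when the
 * caller writes every new slot before reading any (e.g. a splice or
 * write_buf that pastes data over the whole new range); saves the
 * memset over the tail. */
static int buf_grow_uninit(buf_t *b, size_t new_size) {
    char *p = realloc(b->data, new_size);
    if (!p) return -1;
    b->data = p;
    b->size = new_size;
    return 0;
}
```

As noted in the log, the win is just the skipped memset, small per call but taken on a hot path during compilation.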