github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm Set by AlexDaniel on 12 June 2018. |
00:27
fake_space_whale joined
01:00
lizmat joined
03:24
fake_space_whale left
06:30
lizmat left
08:04
lizmat joined
08:19
lizmat left
08:26
lizmat joined
09:30
sena_kun joined
timotimo | okay, i see what's going wrong with the profile data stuff | 10:19 | |
how does a null check segfault :| | 10:24 | ||
ah, it was just that the line number in gdb was off | 10:25 | ||
because optimizations | |||
Guest2775 | timotimo: did you sleep well? (dogbert in disguise) | 10:30 | |
timotimo | not very, actually :( | 10:31 | |
i'm at my parents' place and the bed i have here tends to give me back aches and such after a few days | |||
Guest2775 | :( | ||
usually we like optimizations :) | 10:34 | ||
timotimo | twitter.com/loltimo/status/1122086704809238528 ← lizmat, maybe this could interest you; why is github.com/rakudo/rakudo/blob/mast...ng.pm6#L15 calling Bool over and over? | 10:35 | |
lizmat: 462k calls to MERGESORT-REIFIED-LIST turn into 10 million calls to SETTING::Mu:101 (Bool) | 10:36 | ||
oh | |||
maybe it's obvious, it's also calling Str's infix:<cmp>, so i suppose that probably returns a p6bool | |||
oh, of course cmp returns Order objects | 10:40 | ||
mergesort-reified-list has infix:<cmp> inlined \o/ | 10:41 | ||
Bool is at the very top of the routine list by "exclusive time", though since it's also got the most invocations by far, it could just be profiling overhead, tbh | 10:45 | ||
hm, nah, identity has like 10x less time per invocation | |||
anyway, i can push fixes for profiling now :) | 10:48 | ||
lizmat | timotimo: yeah, maybe we should not use infix:<cmp> by default | 10:49 | |
but something lower level | 10:50 | ||
timotimo | perhaps | 10:52 | |
or | |||
instead of boolifying the Order objects, directly compare against them? | |||
and simply fail if a user-supplied cmp returns something that isn't an Order? | 10:53 | ||
i pushed the profiling fixes that dogbert17 made me aware of \o/ | 10:55 | ||
Guest2775 | yay: that might possibly have fixed the bug MasterDuke reported | ||
M#1071 | 10:56 | ||
meh, github.com/MoarVM/MoarVM/issues/1071 | |||
timotimo: did you have to make nqp changes as well? | 10:58 | ||
timotimo | no | 11:01 | |
the problem was that with our initial fix (putting the MVM_profile_dump* functions somewhere else) we accidentally called the dump functions once per thread | |||
that caused the resulting array to have "types array, thread 1 data, thread 2 data, types array, thread ? data, thread ?? data" | 11:02 | ||
and the nqp code expected only thread data (which is hashes) after the first types array (which is an array) | |||
lizmat | timotimo: the problem is that some custom infix<cmp> candidates may have been added... | ||
so to me the question really is: why weren't the calls to Bool optimized away ? | |||
timotimo | not entirely sure | 11:04 | |
AFK for a bit now | |||
perhaps since this is the result of "blah".comb.sort it could use something faster; it could theoretically know it's an array of string objects | 11:05 | ||
lizmat | fwiw, the 23.4% for Bool in the profile, is attributed to the proto! ???? | 11:11 | |
kawaii | Good morning o/ | ||
lizmat | kawaii o/ | ||
kawaii | I didn't check this channel for a few days, did anything happen with regards to a release for moarvm 2019.04? | 11:12 | |
sena_kun | kawaii, we need 2019.03 | 11:13 | |
or, rather, releasing 2019.04 is an option too, but I am more interested in resurrecting at least the latest | |||
kawaii | sena_kun: since I've not built moar before I'm not sure what is involved, perhaps someone can do something with one of the tagged releases here? github.com/MoarVM/MoarVM/releases | 11:14 | |
sena_kun | it contains source code. :( | 11:16 | |
Guest2775 | timotimo: I notice a tiny problem when looking at a profile in the browser | 11:20 | |
the Overview page says, among other things: The profiled code did 23 garbage collections. There were 8 full collections involving the entire heap. | 11:21 | ||
however the GC tab does not reflect that | |||
timotimo | what does it show? | 11:26 | |
Guest2775 | it's empty, only the headers are visible | 11:28 | |
timotimo | oh | 11:31 | |
that isn't right | |||
any output from moarperf? | |||
wait, is that the html profiler? | |||
Guest2775 | html yes | 11:32 | |
timotimo | any output in the console? | 11:33 | |
Guest2775 | what console | 11:35 | |
timotimo | the browser's debugger thingie | 11:37 | |
right-click "inspect element" gets you there | |||
what code are you running to get this? i do get gc run entries on my end | 11:38 | ||
Guest2775 | there are no errors that I can see | ||
let me try again ... | |||
timotimo: does this work for you? ./perl6 --profile -e 'for ^50 { my $cmd = run « cat CORE.setting.moarvm », :out; $cmd.out.close; }' | 11:39 | ||
the profile is 800k | 11:40 | ||
timotimo | i'd assume that most garbage collections didn't involve the main thread | 11:42 | |
hmm | 11:43 | ||
moarperf also shows fewer gc runs, but the total number is also not as high | 11:44 | ||
Guest2775 | it's a bit odd methinks | 11:45 | |
timotimo | yeah, something is off | ||
Guest2775 | off by one :) | ||
jnthn | Hmm, if something spends ages in the Bool proto it'd suggest that it's going through the slow-path dispatcher, not hitting the cache | 11:49 | |
timotimo | didn't that used to also show up in the profile ... | ||
jnthn | I'd have thought it would | 11:50 | |
timotimo | i at least didn't see it in that one | 11:51 | |
in the speshlog it shows up with 51 total hits in the last report | 11:53 | ||
bind_one_param, that is | |||
jnthn | No, it'd be find_best_dispatchee I think | ||
timotimo | only shows up in string literals, not in reports | ||
jnthn | hmm | 11:54 | |
bbiab | |||
timotimo | sort_dispatchees had 1 total hits, 8 osr hits | ||
also BBL | 12:01 | ||
12:16
sena_kun left,
AlexDaniel left
12:17
AlexDaniel joined
nine | How can I disable hash key randomization? | 12:25 | |
jnthn | nine: I think there's a #define for it in uthash_string.h or some such; forget the exact name | 12:40 | |
nine | Ah, MVM_HASH_RANDOMIZE | 12:43 | |
12:45
MasterDuke joined
12:46
MasterDuke left,
MasterDuke joined
Guest2775 | ah, it's MasterDuke | 12:50 | |
MasterDuke | i see there have been some --profile segvs fixed, dogbert17/Guest2775++, timotimo++ | 12:54 | |
Guest2775 | MasterDuke: does it work for you, i.e. with the 130 meg file? | 12:55 | |
MasterDuke | looks good so far... | 12:56 | |
worked for a couple ^50 runs and one ^200 | 12:57 | ||
Guest2775: i think there are a couple other open issues about segvs when profiling, have you tested the new fix with any of those? | 13:00 | ||
Guest2775 | nope, perhaps I should | 13:06 | |
timotimo | it's probably only segfaults that happen after "writing profiler data to blah" has been output | ||
Guest2775 | MasterDuke: what are you in fact running here? github.com/MoarVM/MoarVM/issues/1023 | 13:10 | |
MasterDuke | Juerd's mqtt regex test/benchmark | 13:11 | |
timotimo | i hope that's not due to a stack overflow? but 200 frames should fit for sure | 13:12 | |
MasterDuke | `MoarVM panic: Internal error: invalid thread ID 60 in GC work pass` | ||
`MoarVM panic: Internal error: zeroed target thread ID in work pass` | 13:13 | ||
`Segmentation fault (core dumped)` | |||
all the same errors i got before. so the recent fix doesn't seem to have changed anything for that test case. but it is a tougher one since it has all those EVALs | 13:14 | ||
Guest2775 | oh no | ||
timotimo | damn | 13:16 | |
could you try that with the gc debug turned on? to 1 or to 2? | |||
MasterDuke | timotimo: btw, just added a couple new outputs to github.com/MoarVM/MoarVM/issues/1023 | 13:18 | |
and yeah, will do | |||
timotimo | the first look at that output is "backtrace is not helpful" | 13:20 | |
MasterDuke | even the non-jitted one? | ||
`MoarVM panic: Adding pointer 0x7ffff7102bd8 to past fromspace to GC worklist`, with GC_DEBUG set to 2 | 13:21 | ||
that's all i seem to get now. should i break on MVM_panic and get a backtrace? | 13:23 | ||
timotimo | yeah, that could help more | ||
MasterDuke | issue updated | 13:24 | |
timotimo | damn, how does that happen | ||
what's line 825 in src/profiler/instrument.c on your end? | 13:25 | ||
MasterDuke | oh, GC_DEBUG=2 makes t/02-rakudo/99-misc.t fail `not ok 3 - profiler does not crash` | ||
timotimo | i think i might have some debug output code or so added | ||
MasterDuke | mark_call_graph_node(tc, node, &nodelist, worklist); | ||
timotimo | ok, so it's the recursion call | 13:37 | |
i'll be mostly afk for the rest of the day soon :( | |||
can you give me detailed print-outs of the nodelist, node, and maybe also the worklist? | |||
i'll see when i can get back to this | |||
MasterDuke | in frame 1? | ||
timotimo | yeah | 13:38 | |
and perhaps also whichever i it has in mark_call_graph_node | |||
MasterDuke | ok. gotta recompile with --optimize=0, but will add those to the issue | 13:39 | |
13:55
lucasb joined
dogbert17 has relocated | 14:08 | ||
MasterDuke | timotimo: hm, now that i'm running with --optimize=0 i'm not getting the MVM_panic in the exact same spot as before. i updated the issue with what i'm getting, but the backtrace doesn't have mark_call_graph_node anymore | 14:17 | |
dogbert17 | MasterDuke: what happens if you run with MVM_SPESH_INLINE_DISABLE=1 | ||
MasterDuke | dogbert17: no change | 14:19 | |
dogbert17 | :( | 14:20 | |
timotimo | any chance for running Configure.pl with --valgrind and valgrinding the whole thing? | 14:22 | |
MasterDuke | --optimize=1 was no different, trying --valgrind now | 14:23 | |
timotimo | --valgrind will add redzones to some of our allocators | 14:24 | |
MasterDuke | i still have GC_DEBUG=2, anything else i should set/unset? | ||
issue updated | 14:26 | ||
timotimo | that ought to be fine | ||
i wonder why you can't client-request a who_points_at from a valground program | 14:28 | ||
otherwise we could output a "what points at this address" report from valgrind at the panic position | 14:33 | ||
oh! | 14:35 | ||
hold on! | |||
it *is* possible | |||
it's VALGRIND_MONITOR_COMMAND("blah") | |||
MasterDuke | something to add to MVM_panic()? | ||
timotimo | yep | ||
also, there's VALGRIND_MAP_IP_TO_SRCLOC, which i think could be used to help valgrind understand what jitted code comes from | 14:36 | ||
oh | 14:37 | ||
MasterDuke | well, i did have the jit disabled there, but that's good to know in general | ||
timotimo | we actually already use the MONITOR_COMMAND in the FSA when it b0rks | ||
so my thought was that we should generate a command a la "who_points_at 0x4028028" when we MVM_panic for a GC_DEBUG-related error | |||
do you feel up to that? because i'll really be afk for a while soon %) | 14:38 | ||
not sure which exact value we should have "who points at" for | 14:39 | ||
oh, hmm | |||
maybe | |||
after flipping the nurseries we can ask valgrind what points at everything in there | 14:40 | ||
MasterDuke | can i just copy what's in the FSA and stick it in MVM_panic? | ||
timotimo | yeah, it'll be very similar code | ||
but actually | |||
maybe MVM_free_nursery_uncopied could have a "who_points_at" for every object it iterates through (even those that don't have to have gc_free called on them) | 14:41 | ||
aha, who_points_at <addr> [<len>] is the command | 14:44 | ||
and we actually want not only to-be-freed objects to be checked | |||
so perhaps we should "who_points_at start-of-nursery-address length-of-nursery-data" | |||
valgrind.org/docs/manual/mc-manual....r-commands - here's the docs on valid monitor commands | 14:45 | ||
wow, i didn't realize valgrind was able to also have validity bits for cpu registers | 14:47 | ||
that could have been useful for figuring out JIT problems | 14:48 | ||
like when we ported to little endian and something was using wrong struct access | |||
MasterDuke | i have to go do some work around the house, but i'll backlog and then try to test some of the valgrind stuff | ||
timotimo | OK, cool :) | ||
14:55
patrickb joined
nine | I wonder what part of the bytecode moar --dump doesn't show | 15:11 | |
16:57
domidumont joined
17:23
lizmat left
18:01
Kaypie left
18:40
zakharyas joined
18:53
domidumont left
19:25
lucasb left
20:52
squashable6 left
20:57
squashable6 joined
21:48
zakharyas left
22:26
patrickb left
MasterDuke | timotimo: around? | 23:48 |