github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm | Set by AlexDaniel on 12 June 2018
00:00 dogbert17 joined
00:02 dogbert11 left
01:51 dogbert17 left, dogbert17 joined
02:26 lucasb left
06:16 vrurg left
06:17 vrurg joined
06:19 cog joined, cog__ left
06:53 cog_ joined
06:56 cog left
07:50 squashable6 left
07:51 squashable6 joined
nwc10 | good *, #moarvm | 07:56
MasterDuke | i would love to know why all of a sudden none of my `if (spesh_cand->body.discarded) { ... }` are hitting... | 08:12
ha, ugh. it's because `cand->body.discarded = 1;` was commented out in MVM_spesh_candidate_discard_one. damn this switching branches and vim reloading files | 08:18
ok. now the print in MVM_frame_invoke happens. however, just to see what would happen i added a print if discarded in all the sp_fastinvoke_* ops in interp.c and they never hit | 08:22
wow. if i run it normally, `if (spesh_cand && spesh_cand->body.discarded) { fprintf(stderr, "oops, discarded\n"); spesh_cand = NULL; }` is hit (because a later `if (spesh_cand->body.discarded) { ... }` isn't being hit), but that print never happens. if i run it in gdb the print does happen | 08:27
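[editor's note: a minimal, self-contained sketch of the guard pattern being discussed above. The struct layout and function names are simplified stand-ins rather than MoarVM's real definitions; only the `discarded` check, the fprintf, and the NULL fallback mirror the snippets quoted in the log.]

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for MoarVM's spesh candidate types (hypothetical). */
typedef struct {
    int discarded;   /* set when the specialization has been thrown away */
} SpeshCandBody;

typedef struct {
    SpeshCandBody body;
} SpeshCandidate;

/* The pattern under discussion: refuse to use a candidate that has been
 * discarded, log it, and fall back to the unspecialized path (NULL). */
static SpeshCandidate *check_candidate(SpeshCandidate *spesh_cand) {
    if (spesh_cand && spesh_cand->body.discarded) {
        fprintf(stderr, "oops, discarded\n");
        spesh_cand = NULL;
    }
    return spesh_cand;
}

int main(void) {
    SpeshCandidate cand = { { 1 } };            /* pretend it was discarded */
    SpeshCandidate *use = check_candidate(&cand);
    printf("using %s path\n", use ? "specialized" : "unspecialized");
    return 0;
}
```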
and this takes 5s to run, could the prints really be buffered that long that they get discarded on exit? | 08:29
still doesn't print if i change it to stdout | 08:30
nine | MasterDuke: you could add an fflush(stderr) after the fprintf(stderr) | 08:40
MasterDuke | nope, no prints | 08:42
recompiling with --optimize=0 didn't change anything | 08:45
however, then recompiled without the --optimize=0 and now the prints appear | 08:47
removed the fflush, still appearing now | 08:48
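[editor's note: nine's suggestion in isolation, for readers unfamiliar with stdio buffering. As the log goes on to show, the missing output turned out to be a stale/mixed build rather than buffering, but an explicit flush is the standard way to rule buffering out.]

```c
#include <stdio.h>

int main(void) {
    /* Flush explicitly so a message cannot sit in a stdio buffer if the
     * process exits abnormally.  The C standard requires stderr not to be
     * fully buffered, so this is usually already the case, but the fflush
     * removes buffering from the list of suspects entirely. */
    fprintf(stderr, "oops, discarded\n");
    fflush(stderr);

    /* stdout, by contrast, is often fully buffered when redirected to a
     * file or pipe, so the same trick matters more there. */
    fprintf(stdout, "some diagnostic\n");
    fflush(stdout);
    return 0;
}
```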
08:53 zakharyas joined
MasterDuke | assuming that was just some weird compiling/building mixup, now i'd like to figure out a way to prevent the 688k deopts | 08:53
10:21 Kaiepi left
jnthn | I assume you ran without JIT, otherwise the things in interp.c won't happen | 10:23
MasterDuke | yeah. i get the 688k deopts either way, but running with MVM_JIT_DISABLE=1 now for ease of testing | 10:24
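[editor's note: a generic illustration of how an environment-variable switch like MVM_JIT_DISABLE can be consulted. The helper below is purely illustrative; MoarVM's actual handling of this variable lives in its own setup code and may differ in detail.]

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Illustrative only: treat the switch as set when it has a non-empty,
 * non-"0" value.  Not MoarVM's real parsing logic. */
static int env_flag(const char *name) {
    const char *v = getenv(name);
    return v != NULL && *v != '\0' && strcmp(v, "0") != 0;
}

int main(void) {
    if (env_flag("MVM_JIT_DISABLE"))
        printf("JIT disabled: specialized code runs through the interpreter,\n"
               "so prints added to interp.c's sp_* ops can actually fire\n");
    else
        printf("JIT enabled\n");
    return 0;
}
```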
10:24 Kaiepi joined
MasterDuke | any suggestions? i can set the spesh_cand to NULL (either in interp.c or in frame.c), but that's not preventing the deopts | 10:42
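[editor's note: a conceptual sketch, with made-up names, of what nulling out the candidate achieves: the frame falls back to the generic entry path instead of running a discarded specialization. It is not MoarVM code, and as MasterDuke notes it does nothing about the deopts themselves.]

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-ins, not MoarVM's real types or entry points. */
typedef struct { int discarded; } SpeshCandidate;

static void run_specialized(SpeshCandidate *cand) { (void)cand; puts("specialized body"); }
static void run_generic(void)                     { puts("unspecialized body"); }

/* Conceptually what "set the spesh_cand to NULL" buys you: the frame is
 * entered through the generic path instead of a discarded specialization.
 * It does not address *why* the specialization kept deoptimizing. */
static void invoke(SpeshCandidate *cand) {
    if (cand == NULL || cand->discarded)
        run_generic();
    else
        run_specialized(cand);
}

int main(void) {
    SpeshCandidate discarded = { 1 };
    invoke(&discarded);   /* discarded candidate: falls back to generic */
    invoke(NULL);         /* no candidate at all: generic path too */
    return 0;
}
```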
12:56 zakharyas left
14:46 zakharyas joined
15:33 leont joined
17:16 domidumont joined
18:03 patrickb joined
19:04 zakharyas left
19:23 domidumont left
MasterDuke | i was curious about the consistent slowdown on my branch, here's what perf has to say for `MVM_JIT_DISABLE=1 MVM_SPESH_BLOCKING=1 raku -e 'my $r := "a" .. "za"; my @a = $r[^$r.elems]; say now - INIT now'` | 19:42
gist.github.com/MasterDuke17/9116a...0a26501b8b
and what do you know, MVM_spesh_arg_guard_run is called 637934 times more on my branch... | 19:47
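[editor's note: one low-tech way to cross-check a call-count difference that perf reports is a static counter plus an atexit report, sketched below. This is a generic instrumentation trick, not something taken from the gist or the MoarVM tree.]

```c
#include <stdio.h>
#include <stdlib.h>

/* Drop a static counter into the function you care about and report it at
 * exit; comparing the number between two builds confirms what perf says. */
static unsigned long arg_guard_runs;

static void report(void) {
    fprintf(stderr, "arg-guard-style calls: %lu\n", arg_guard_runs);
}

static void arg_guard_run_stub(void) {   /* stand-in for the hot function */
    arg_guard_runs++;
}

int main(void) {
    atexit(report);
    for (int i = 0; i < 637934; i++)     /* the delta perf reported above */
        arg_guard_run_stub();
    return 0;
}
```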
21:21 patrickb left
MasterDuke | hm, looks like a lot of the MVM_frame_invoke calls are from invoke_o, not sp_fastinvoke_o. those don't pass a spesh_cand, so it's got to look one up each time. but not sure why it's invoke_o... | 22:24
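[editor's note: a hypothetical sketch of the contrast being drawn here. In MoarVM the real pieces are the sp_fastinvoke_* ops (which carry a pre-selected candidate) versus invoke_o plus MVM_spesh_arg_guard_run (which must select one per call); everything below is simplified stand-in code.]

```c
#include <stdio.h>
#include <stddef.h>

typedef struct { const char *name; } Candidate;

static Candidate table[] = { { "cand A" }, { "cand B" } };

/* Slow path: nothing is pre-resolved, so every invocation has to run the
 * argument guards to pick (or fail to pick) a specialization. */
static Candidate *run_arg_guards(void) {
    puts("running arg guards (per-call cost)");
    return &table[0];
}

static void invoke_slow(void) {
    Candidate *c = run_arg_guards();
    printf("invoking %s\n", c ? c->name : "unspecialized code");
}

/* Fast path: spesh already proved which candidate applies and baked its
 * index into the op, so the per-call guard run is skipped entirely. */
static void invoke_fast(size_t baked_index) {
    printf("invoking %s (no guard run)\n", table[baked_index].name);
}

int main(void) {
    invoke_slow();      /* roughly what an invoke_o-style call must do */
    invoke_fast(1);     /* roughly what an sp_fastinvoke_*-style call gets */
    return 0;
}
```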
22:30 japhb joined
jnthn | MasterDuke: Is there any evidence that it produces a new specialization after the other one is discarded? | 22:48
invoke_o is the unoptimized path
I think in the static frame (or the static frame spesh data structure that hangs off it) there is also a count of how many times we invoked it and did spesh logging for it; that count may need clearing for it to log and specialize again | 22:49
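[editor's note: a conceptual sketch of jnthn's point, with invented names: if a per-static-frame counter gates how long spesh logging runs, then after a candidate is discarded that counter may need resetting before the frame can be logged, and hence specialized, again.]

```c
#include <stdio.h>

/* Made-up names and limit; only the idea (counter-gated logging, reset on
 * discard) reflects the discussion.  The field MasterDuke asks about below
 * (spesh_entries_recorded) would play the role of entries_recorded here. */
enum { LOG_LIMIT = 4 };

typedef struct {
    unsigned entries_recorded;
} FrameSpeshState;

static void maybe_log(FrameSpeshState *st) {
    if (st->entries_recorded < LOG_LIMIT) {
        st->entries_recorded++;
        printf("logging call %u\n", st->entries_recorded);
    } else {
        printf("limit reached; not logging\n");
    }
}

static void on_candidate_discarded(FrameSpeshState *st) {
    st->entries_recorded = 0;    /* allow the frame to be logged afresh */
}

int main(void) {
    FrameSpeshState st = { 0 };
    for (int i = 0; i < 6; i++) maybe_log(&st);
    on_candidate_discarded(&st);
    maybe_log(&st);              /* logging resumes after the reset */
    return 0;
}
```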
MasterDuke | in a spesh log i'm pretty sure i saw a specialization, then a removal, then another specialization, but let me confirm
jnthn | OK | 22:50
Not really at the keyboard at the moment, just wandering by... but probably will be again a little later on too :) | 22:51
MasterDuke | yep. observed type specialization, removal because of too many deopts, observed type specialization | 22:52
spesh_entries_recorded? | 22:57
off to sleep, perchance to dream... | 23:43
23:47 dogbert11 joined
23:51 dogbert17 left