github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
MasterDuke blog.anp.lol/rust/2018/09/29/lolbench/ "automagically and empirically discovering Rust performance regressions", looks like a nice setup 01:00
brrt \o 06:26
nwc10 o/ 06:28
brrt I need to merge the bitshift work... 06:29
timotimo jnthn: any reason why "eliminate_pointless_gotos" can't move to after the post_inline_pass? 08:46
ah, merge_bbs benefits from it 08:47
before: 08:48
Frame size: 850 bytes (722 from inlined frames)
after:
Frame size: 806 bytes (678 from inlined frames)
m: say 850 - 722; say 806 - 678;
camelia 128
128
timotimo right.
m: say 722 - 678
camelia 44
jnthn How'd you get that? :) 08:50
timotimo your optimization for takedispatcher was incomplete ;)
it replaced the takedispatcher op with null, but nothing after that looked at the instruction to set the "known type" fact 08:51
and the post_inline pass didn't look at isnull instructions, nor did it look at if_i or unless_i
jnthn Uh, yeah, you can't optimize much inside inlines at the moment :)
timotimo and then for good measure i also added "eliminate_pointless_gotos" after it, and i could also "merge_bbs" again
jnthn Because there's no information on what is retained for deopt available 08:52
So the chances of screwing things up are very high.
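The fact propagation timotimo describes can be pictured as a tiny constant-folding pass: once takedispatcher has been rewritten to a null constant, an isnull on that register has a statically known result, and the if_i/unless_i consuming it can be folded away. A minimal conceptual sketch in Python, assuming a toy tuple-based IR (none of these names are MoarVM APIs):

```python
# Toy IR: each instruction is (op, dest, *args).
# "facts" records what we statically know about a register's value.

def propagate_facts(instructions):
    """Fold isnull and unless_i when the operand's value is known."""
    facts = {}   # register -> known constant value
    out = []
    for op, dest, *args in instructions:
        if op == "null":
            # e.g. a takedispatcher that was rewritten to a null constant
            facts[dest] = None
            out.append((op, dest, *args))
        elif op == "isnull" and args[0] in facts:
            val = 1 if facts[args[0]] is None else 0
            facts[dest] = val
            out.append(("const_i", dest, val))   # fold to a constant
        elif op == "unless_i" and args[0] in facts:
            # Branch condition is known: keep it only if it is taken.
            if not facts[args[0]]:
                out.append(("goto", None, args[1]))
            # otherwise the branch is dead and is dropped entirely
        else:
            out.append((op, dest, *args))
    return out
```

With a known-null register, the isnull folds to a constant 1 and the unless_i disappears, which is what then lets eliminate_pointless_gotos and merge_bbs shrink the frame further.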
timotimo i'll create a PR for the changes, i suppose?
jnthn Yeah, but I'm going to reject it on principle in the immediate term.
timotimo right. if it does stay open, though, discussion can take place 08:53
jnthn Or well, defer it until we have the things needed to do it safely.
Right.
timotimo and it won't fall through the cracks
jnthn But I really don't want things going into spesh that I know are vulnerable to causing weird deopt bugs.
(Yes, it's pretty high on my list to retain enough information to let us do things like this safely.) 08:54
timotimo aye
Geth MoarVM/better_takedispatcher_opt: d857e7a59a | (Timo Paulssen)++ | 2 files
set and use facts on eliminated takedispatcher
08:56
MoarVM/better_takedispatcher_opt: f01dba6da1 | (Timo Paulssen)++ | src/spesh/optimize.c
opt out if_i, unless_i and pointless gotos post inline
jnthn fwiw, I think the better approach on this will be to do a normal optimize_bb pass on the inlinee - when we've got to the point where we can do it safely 08:57
Geth MoarVM: timo++ created pull request #979:
Better takedispatcher opt
08:58
timotimo here's the PR including a snippet of our discussion
timotimo yeah, spec tests give me a test that sometimes triggers a MoarVM oops: Spesh: failed to fix up inline 0 (last) 552 -1 09:07
jnthn brrt: One more case of the negative label thing just got reported 12:12
m: my $a = ('a' x 200).comb; $a ~~ s:g/<ws>//
camelia ( no output )
jnthn m: my $a = ('a' x 200).comb; $a ~~ s:g/<ws>// 12:13
camelia JIT ERROR: Negative offset for dynamic label 32
jnthn There :)
brrt yeah, I have a good hypothesis
thanks :-)
brrt grumbles because adding a 'noop template' isn't as simple as I thought it'd be 13:55
Geth MoarVM: 271e613f4f | (Jonathan Worthington)++ | src/spesh/plugin.c
Fix a leak in spesh plugin optimization
13:59
Geth MoarVM: ecbc1295ae | (Jonathan Worthington)++ | src/spesh/optimize.c
Fix compiler stub test in call optimization

So we bail out if we see one. Otherwise, we can end up with the high level code object being used with a fastinvoke, which expects a low level code object, and then explodes.
14:54
Geth MoarVM/pea: 82a8650f06 | (Jonathan Worthington)++ | 12 files
Minimal escape analysis and scalar replacement

This only handles the case where:
  * No aliasing
  * No deoptimizing instructions
  * No control flow (conditionals, loops)
... (7 more lines)
15:03
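The restrictions the commit lists (no aliasing, no deoptimizing instructions, no control flow) are precisely what makes the textbook transformation safe. As a conceptual before/after only, not MoarVM code, scalar replacement deletes a non-escaping allocation and turns each attribute into a plain local:

```python
# Conceptual illustration of scalar replacement, in plain Python.

def before(x, y):
    # A short-lived object is allocated just to carry two values
    # across a few straight-line instructions.
    p = {"x": x, "y": y}   # allocation the analysis proves non-escaping
    return p["x"] + p["y"]

def after(x, y):
    # The allocation is gone; each attribute became a local.
    # Valid only because p never aliases, never crosses a deopt
    # point, and there is no control flow between allocation and use.
    p_x = x   # scalar-replaced attribute "x"
    p_y = y   # scalar-replaced attribute "y"
    return p_x + p_y
```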
Geth MoarVM/pea: 3835915167 | (Carl Masak)++ (committed by Jonathan Worthington) | src/spesh/manipulate.c
Add missing comment to a new function
15:03
MoarVM/pea: f93d75ba45 | (Jonathan Worthington)++ | src/spesh/pea.c
Handle int64, num64, and str attributes in PEA
jnthn Le rebase :)
Geth MoarVM/pea: 69b813819a | (Jonathan Worthington)++ | 2 files
Introduce irreplaceable flag for allocations

Which indicates that we can't currently scalar replace them. In the future, we will often be able to just materialize the object at that point in time, if it makes sense to do so. For now, this means we can start to liberalize things a little bit. The most immediate way is to not bail the whole analysis just because one tracked allocation cannot be scalar replaced.
Additionally, properly handle PHIs that sneak in mid-block (due to other opts) and the argument to an object attribute bind.
16:05
MoarVM/pea: 40844daa11 | (Jonathan Worthington)++ | 2 files
Slightly liberalize handling of deopt in PEA
16:21
brrt I think I fixed the bug 19:48
AlexDaniel what about this one? :) github.com/rakudo/rakudo/issues/2340 19:51
brrt yeah, that one 19:53
don't panic, that was always a bug, it's just that now, I know about it :-)
AlexDaniel ok :) 19:58
timotimo , 20:03
,,
i'm sorry 20:04
the cat forced me to do this
Geth MoarVM: ceea63332b | (Bart Wiegmans)++ | 2 files
[JIT] Do not skip adding labels to PHI nodes

In some circumstances, apparently we can end up with PHI nodes with labels on them that are not at the start of the tree. These labels would be allocated but not emitted, which would (in this case) lead to incorrect exception handler bounds.
DynASM reported this but this (exceptional and very wrong) condition was previously being logged only to the JIT log, where nobody ever saw it. When I changed it to stderr, reports came in, and now it's fixed.
20:19
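The bug class this commit fixes is an invariant violation: every label the JIT allocates must also be emitted, or anything resolved against it (here, exception handler bounds) ends up with a garbage offset. A hypothetical invariant check, not the actual DynASM/MoarVM code:

```python
def unemitted_labels(allocated, emitted):
    """Return labels that were allocated but never emitted.

    After the fix this set is empty; before it, PHI nodes that were
    not at the start of the expression tree could leave labels here,
    producing wrong (e.g. negative) offsets for handler bounds.
    """
    return sorted(set(allocated) - set(emitted))
```

Logging such a violation only to the JIT log hid it for a long time; surfacing it on stderr is what produced the "Negative offset for dynamic label" reports above.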
jnthn ooh, I wonder if this one could be why deopt for the JIT got -1's sometimes or some such 20:23
brrt possibly
yes
not 100% certainty, but possibly
timotimo jnthn: how far along are we towards "put deopt targets earlier if only pure ops are in front of it"?
jnthn It's not very high on my todo list at the moment; the existing work to improve precision has already cut out an awful lot of the deopt leftbehinds that we could eliminate, and guard elimination some more... 20:25
timotimo that's cool, good to hear
jnthn So I'm not sure relative to its complexity that it'll be a big win 20:26
timotimo that's fair. maybe i can at some point come up with a metric or visualization of some kind
jnthn Yeah, it'd be interesting to know if my gut feeling is off. Of course, I've also only looked at a small sample of all the spesh output :) 20:27
timotimo sure, sure
lizmat decommute& 21:10