github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
lizmat And yet another Perl 6 Weekly hits the Net: p6weekly.wordpress.com/2019/01/08/...-for-2019/ 11:58
jnthn Curious, I'm sure the pea branch was more explosive in the past 16:03
jnthn Still, there's quite enough segfaults for me to hunt :) 16:13
brrt ohai #moarvm 17:14
I copied jnthn shamelessly: brrt-to-the-future.blogspot.com/201...-post.html
(and somewhat shorter) 17:15
Geth MoarVM/pea: 9fbc8a4e54 | (Jonathan Worthington)++ | src/spesh/pea.c
Use correct indexes for deopt usage chain lookups

We sometimes have synthetic deopt points. In that case, the index present in the deopt usage chain is that of the original deopt point that was cloned to make the synthetic one, while the synthetic point's own index is the one that will be searched for when we are actually in the process of deoptimizing. Handle these properly when building the deopt materialization table. Fixes one of the crashes caused by the PEA deopt support in spectest.
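
A minimal, self-contained C sketch of the index-resolution idea described in that commit message; every struct, field, and function name below is hypothetical and does not reflect MoarVM's actual spesh/pea data structures.

    #include <stdio.h>

    #define NUM_DEOPTS 4

    /* A deopt point; a synthetic point remembers which original it was cloned from. */
    typedef struct {
        int is_synthetic;
        int cloned_from;   /* index of the original deopt point, or -1 */
    } DeoptPoint;

    /* A usage-chain entry records the deopt index it was created against, which
     * for a synthetic point is the original (pre-clone) index. */
    typedef struct {
        int recorded_deopt_idx;
    } DeoptUsage;

    /* Resolve the index that goes into the materialization table: the one that
     * will actually be searched for at deopt time. */
    static int resolve_deopt_idx(const DeoptPoint *points, int point_idx,
                                 const DeoptUsage *usage) {
        const DeoptPoint *p = &points[point_idx];
        if (p->is_synthetic && usage->recorded_deopt_idx == p->cloned_from) {
            /* The usage chain still names the original; the table entry must
             * use the synthetic point's own index instead. */
            return point_idx;
        }
        return usage->recorded_deopt_idx;
    }

    int main(void) {
        DeoptPoint points[NUM_DEOPTS] = {
            { 0, -1 },   /* 0: original                 */
            { 0, -1 },   /* 1: original                 */
            { 1,  0 },   /* 2: synthetic clone of 0     */
            { 1,  1 }    /* 3: synthetic clone of 1     */
        };
        DeoptUsage usage = { 0 };  /* chain entry still naming original 0 */

        printf("table index for point 2: %d\n", resolve_deopt_idx(points, 2, &usage));
        return 0;
    }

The point of the sketch is only that the table must be keyed by the index that is live at deopt time, not by the pre-clone index recorded in the usage chain.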
brrt pea is moving along nicely 17:29
into obscure-bug territory :-) 17:30
jnthn Well, the "make it understand deopt" phase is, anyway 17:36
Still lots more things it needs to do in order to do a really good job
Though already I see it's happily doing away with various Scalars. :) 17:37
brrt++ # blog 17:38
Hmm...next failing spectest only blows up occasionally...grmbl 17:49
Maybe I'll pick a different one; this one does call rand, I now see...
But, tomorrow :)
brrt today, beer? 17:52
.oO( I like that idea )
jnthn Bit later, yeah...chili con carne first :)
bbl o/
brrt Oh, that's not a bad start of an evening
o/
timotimo i just randomly worried again about order of optimizations. like, maybe turning operations into direct memory operations should go in a very late step so that semantics of different operations can be used by the later stages, too 19:14
for example, boxing elimination wouldn't immediately be able to turn a p6oget_i into a set on the original register, i don't think
i don't have any sensible idea for any solutions. like, we wouldn't want to store two versions of the same thing with some kind of priority, because then we'd have to have basic block fragments or something 19:15
and all the code needs to be careful to handle situations where there's multiple versions 19:16
and facts and guards and ugh
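
To make the ordering concern concrete, here is a toy C sketch; it is not MoarVM code, and all opcode names and structures are invented. Once a semantic unbox-style op has been lowered to a raw offset load, a later boxing-elimination pass can no longer match it against the producing box and rewrite it into a plain register copy.

    #include <stdio.h>

    typedef enum { OP_BOX_I, OP_UNBOX_I, OP_LOAD_OFFSET, OP_SET } ToyOp;

    typedef struct {
        ToyOp op;
        int dst, src;       /* toy virtual registers */
    } ToyIns;

    /* Lowering pass: turn the semantic unbox into a raw offset load early. */
    static void lower(ToyIns *ins, int n) {
        for (int i = 0; i < n; i++)
            if (ins[i].op == OP_UNBOX_I)
                ins[i].op = OP_LOAD_OFFSET;   /* the semantics are now gone */
    }

    /* Boxing elimination: unbox(box(x)) => set dst, x. It only fires while the
     * instruction still carries its high-level meaning. */
    static int eliminate_boxing(ToyIns *ins, int n) {
        int changed = 0;
        for (int i = 1; i < n; i++) {
            if (ins[i].op == OP_UNBOX_I &&
                ins[i - 1].op == OP_BOX_I &&
                ins[i].src == ins[i - 1].dst) {
                ins[i].op  = OP_SET;
                ins[i].src = ins[i - 1].src;  /* copy the original register */
                changed = 1;
            }
        }
        return changed;
    }

    int main(void) {
        ToyIns code[] = {
            { OP_BOX_I,   1, 0 },   /* r1 = box r0   */
            { OP_UNBOX_I, 2, 1 }    /* r2 = unbox r1 */
        };

        /* Lowering first destroys the pattern boxing elimination needs: */
        lower(code, 2);
        printf("elimination fired after early lowering: %d\n",
               eliminate_boxing(code, 2));   /* prints 0 */
        return 0;
    }

Running the elimination before the lowering step would let it fire, which is the ordering timotimo is arguing for.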
[Coke] just keep running all the opts until nothing changes. 19:17
MUAHAHAHAHA
nwc10 This is the "simulated annealing" school of optimisation? 19:19
japhb nwc10: It's actually a reasonable practice -- it's attempting to find a fixed point of the optimization contractor 19:21
Mind you, it assumes you have a LOT of time to run the optimizer
[Coke] might be a nice mode to have if you're compiling bytecode for distro 19:23
nwc10 I was being somewhat facetious, but was figuring something like what you've made clear (and far more helpfully than my comment) - this keeps CPUs toasty warm, and isn't really great for a *J*IT. And what [Coke] says
everyone says more sensible things than me tonight :-)
er, "this morning"
[Coke] afternoon here. :) 19:24
nwc10 it's dark outside here. But it's about the same darkness most days when I've been getting up.
(but we're trying to get the children to go to bed, rather than trying to get them up, so that probably eliminates the ambiguity) 19:25
timotimo the problem is that some optimizations prevent different optimizations 19:34
so just doing all you can over and over doesn't actually help there 19:35
japhb timotimo: Granted, it's not just a matter of wrapping a loop around the top level of your optimize function. I think it more often becomes: prep input -> loop(early optimizations) -> do early-to-late conversions -> loop(late optimizations) -> final write out 20:04
Or something similar
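
A rough C sketch of that shape of pipeline, under the assumption that each pass reports whether it changed anything; the graph type and passes below are placeholders, not MoarVM spesh internals. Each phase is run to a fixed point before the one-way lowering step hands the graph to the next phase.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { int value; } ToyGraph;    /* stand-in for an IR graph */
    typedef int (*OptPass)(ToyGraph *g);       /* returns non-zero if it changed g */

    /* Two dummy "early" passes that keep simplifying the graph while they can. */
    static int pass_a(ToyGraph *g) { if (g->value > 3) { g->value -= 1; return 1; } return 0; }
    static int pass_b(ToyGraph *g) { if (g->value % 2) { g->value -= 1; return 1; } return 0; }

    /* Repeat a phase's passes until a full sweep makes no change: a fixed point. */
    static void run_to_fixed_point(ToyGraph *g, OptPass *passes, size_t n) {
        int changed;
        do {
            changed = 0;
            for (size_t i = 0; i < n; i++)
                changed |= passes[i](g);
        } while (changed);
    }

    int main(void) {
        ToyGraph g = { 10 };
        OptPass early[] = { pass_a, pass_b };

        run_to_fixed_point(&g, early, 2);   /* loop(early optimizations) */
        /* ... early-to-late conversions here, then loop(late optimizations) ... */
        printf("early phase reached a fixed point at value %d\n", g.value);
        return 0;
    }

In the worst case the loop reruns every pass many times, which matches nwc10's point that this keeps CPUs warm and is a poor fit for a JIT's time budget.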
masak assuming the search space is continuous enough, one could toy around with making small changes, moving/adding/removing optimization steps and measuring the results 20:31
should probably make sure to measure on a rich-enough set of code bases, though
jnthn I think box/unbox elimination is subsumed in scalar replacement 20:55
So I'm not too worried about that case
Well, it is in one direction, at least :) 20:56
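
One way to picture that subsumption claim, as an invented C before/after pair rather than anything from MoarVM: when a non-escaping box allocation is scalar-replaced, its field becomes a local, and the box/unbox pair vanishes along with the allocation.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { long value; } IntBox;     /* a stand-in for a boxed native int */

    /* Before: the value is boxed into a heap object and immediately unboxed. */
    static long before_scalar_replacement(long x) {
        IntBox *box = malloc(sizeof(IntBox));  /* the allocation PEA would track */
        box->value = x;                        /* box   */
        long y = box->value;                   /* unbox */
        free(box);
        return y + 1;
    }

    /* After: once the box is scalar-replaced, its single field is just a local,
     * so the explicit box/unbox pair disappears with the allocation. */
    static long after_scalar_replacement(long x) {
        long box_value = x;                    /* the replaced "field" */
        return box_value + 1;
    }

    int main(void) {
        printf("%ld %ld\n", before_scalar_replacement(41), after_scalar_replacement(41));
        return 0;
    }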