github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
nwc10 So yes, Guido thinks that spesh is a good idea and Python should have it too: raw.githubusercontent.com/faster-c...onDark.pdf 07:39
timotimo i wonder if the "zero overhead" exception handling is a thing we have in moarvm right now? since we handle exceptions on top of the stack instead of rewinding first 07:48
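[editor's note: the "zero overhead" idea discussed above can be sketched as a table-driven scheme: entering a `try` costs nothing at runtime, and a side table mapping instruction ranges to handlers is consulted only when something is actually raised. This is a minimal illustrative toy, not MoarVM's or CPython's actual implementation; all opcodes and names here are invented.]

```python
def run(code, exception_table):
    """code: list of (op, arg) tuples; exception_table: list of
    (start_pc, end_pc, handler_pc) entries, checked only on a raise."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "DIV":  # the only opcode here that can raise
            b, a = stack.pop(), stack.pop()
            try:
                stack.append(a / b)
            except ZeroDivisionError:
                # "zero cost": we only pay for this lookup on an actual raise
                for start, end, handler in exception_table:
                    if start <= pc < end:
                        pc = handler
                        break
                else:
                    raise  # no handler covers this pc
                continue
        elif op == "RETURN":
            return stack.pop()
        pc += 1

# 10 / 0, with a handler at pc 4 that pushes a fallback value:
code = [("PUSH", 10), ("PUSH", 0), ("DIV", None), ("RETURN", None),
        ("PUSH", -1), ("RETURN", None)]
table = [(2, 3, 4)]  # pc 2 is covered by the handler at pc 4
print(run(code, table))  # -1
```

[the point being: the happy path never touches the table at all, which is the "zero overhead" property.]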
japhb timotimo: github.com/timo/json_fast/pull/73 -- pretty please? :-) 07:51
timotimo oh! 07:52
nwc10 good morning japhb 08:02
clearly far too early in the morning as I'm trying to hit tab to auto-complete "mor" to "morning"
japhb good morning nwc10 :-) 09:05
jnthn timotimo: Some control exception handlers get rewritten into goto in spesh, at least 09:13
timotimo true 09:29
anyway, their specialization approach makes a per-instruction decision about whether to specialize, and if so, how 09:30
deopt now becomes trivial, but optimizations across opcodes aren't in the cards
i assume they also benefit from not having to do any cross-thread synchronization here, thanks to the GIL 09:31
other than that, their specialized ops include attribute access and invocation 09:32
i don't think they have a simple avenue towards inlining 09:33
which as we all know rakudo profits a lot from
but also, our benefits from inlining include parameter passing simplification and tossing out guards on types and such, the first one of which may not be easy for them 09:34
i don't see anything obvious for having multiple specializations of the same code, just that they deoptimize when expectations are violated and toss out the specialization when it's failed too often (compared to how often it succeeds) 09:35
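[editor's note: the scheme timotimo describes — specialize a single instruction on observed types, guard, deoptimize trivially on a miss, and despecialize when misses outweigh hits — could look roughly like this toy sketch. The class, guard, and counter policy are invented for illustration, not CPython's real data structures.]

```python
class AddInstr:
    """One interpreter instruction that adaptively specializes itself."""
    def __init__(self):
        self.specialized_type = None   # None = still in generic form
        self.hits = self.misses = 0

    def execute(self, a, b):
        if self.specialized_type is not None:
            if type(a) is self.specialized_type is type(b):  # cheap guard
                self.hits += 1
                return a + b           # fast path: types are known
            # guard failed: deopt is trivial, just fall through to generic
            self.misses += 1
            if self.misses > self.hits:
                self.specialized_type = None  # failed too often: despecialize
            return a + b
        # generic path: record the types we just saw and specialize on them
        self.specialized_type = type(a) if type(a) is type(b) else None
        return a + b

instr = AddInstr()
instr.execute(1, 2)            # generic run: specializes for int
print(instr.specialized_type)  # <class 'int'>
instr.execute(3, 4)            # guard hit, fast path
instr.execute("a", "b")        # guard miss: deopt, but keep specialization
instr.execute("c", "d")        # miss again: misses > hits, despecialize
print(instr.specialized_type)  # None
```

[note how deopt never needs to rewind anything outside the single instruction — which is the "deopt now becomes trivial" part — but also why nothing optimizes across opcode boundaries.]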
jnthn Hm, this sounds more like inline caching than the whole spesh thing, going on what timotimo just described... 09:37
timotimo so would this be trouble if you iterate over, say, a list of [k, v, k, v, k, v] and just `print(it)` and keys are strings and values are ints? since specializing the loop body's print call would alternate between success and fail
yeah, they also describe it as "inline caching, but not totally"
The closest approach to this PEP in the literature is "Inline Caching meets Quickening" [3]. This PEP has the advantages of inline caching, but adds the ability to quickly deoptimize making the performance more robust in cases where specialization fails or is not stable.
jnthn Does that make new-disp "inline caching but a bit too totally"? :) 09:38
timotimo www.complang.tuwien.ac.at/kps09/pdf...thaler.pdf
haha, you bet
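[editor's note: timotimo's `[k, v, k, v, ...]` question above can be simulated with a toy last-seen-type cache (invented scheme, purely illustrative): a call site that respecializes on whatever type it saw last will miss on every element of a strictly alternating str/int list, so the specialization never stabilizes.]

```python
def simulate(values):
    """Count guard hits/misses for a call site that specializes on the
    last-seen argument type and respecializes on every miss."""
    seen_type, hits, misses = None, 0, 0
    for v in values:
        if type(v) is seen_type:
            hits += 1
        else:
            misses += 1
            seen_type = type(v)  # respecialize on the new type
    return hits, misses

flat = ["k1", 1, "k2", 2, "k3", 3]    # keys are str, values are int
print(simulate(flat))                 # (0, 6): every single call misses
print(simulate([1, 2, 3, 4, 5, 6]))  # (5, 1): stable after the first miss
```

[so yes: under this naive policy the alternating case is pure thrash, which is presumably why the fail-vs-succeed ratio check exists — at some point you stop paying for guards that never pay off.]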
nine I wonder why they aim so low when they have 3 people working full time on this and already expect it to take multiple years 11:57
nwc10 you might be underestimating how hard it is 11:58
given the constraints of what they are not prepared to break 11:59
lizmat .oO( that sounds oddly familiar somehow ) 12:08
jnthn Probably retaining predictability makes it a bit non-trivial also, and that seems to be an important consideration for them 12:10
See "Virtual Machine Warmup Blows Hot and Cold" for just how "interesting" this gets; MoarVM has all the same troubles. 12:11