02:09 cognominal joined 03:30 colomon joined 03:47 japhb joined 05:10 flaviusb joined 07:25 domidumont joined 07:32 domidumont joined 07:42 FROGGS joined 09:43 leont joined 09:47 brrt joined
brrt good * #moarvm 09:47
timotimo good brrt, brrt :) 10:14
brrt good timotimo, timotimo 10:26
hey, good news, i hope to have finished my thesis report, or at least the first draft of it, by the end of this week; meaning that yes, i'll finally have some #moarvm time then :-) 10:27
jnthn \o/
FROGGS \O7 10:31
err
\o/
such excited
timotimo ooooooooh 10:34
brrt :-) 10:46
yeah, i'll be happy to start working on a real register allocator
will still be slowish, but more than currently
timotimo not more slowish, though? 10:47
brrt hopefully 10:57
timotimo :D
brrt nah, stuff is way too exciting to let lie for so long
timotimo well, our time spent in spesh and jit is superbly small in all the code i've run so far
brrt oh, i meant *my* progress, not the jit's speed
timotimo oh! 10:58
brrt it should be
timotimo re-parses the discussion
brrt thinking ahead of time, as i usually do, i'm wondering how we can string together multiple expression trees / basic blocks to form loops which we can compile together 10:59
hmmm
well,
when put that way 11:00
shouldn't be all that hard
we 'just' create a MVMJitExprLoop structure to contain the relevant basic blocks, and we run register allocation on the whole set
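(A hedged sketch of the kind of container brrt is proposing here: MVMJitExprLoop is his name from the line above, but the fields are illustrative assumptions, not anything that exists in MoarVM. The point is simply to hold the basic blocks of a loop together so register allocation can see the whole set at once.)

```c
#include <stdint.h>

/* Hypothetical sketch only: group the basic blocks (each with its own
 * expression tree) that form a loop, so the register allocator can run
 * over the whole set rather than one tree at a time. Field names are
 * assumptions for illustration. */
struct MVMJitExprTree;                       /* one expression tree per block */

typedef struct {
    int32_t                 num_blocks;      /* blocks forming the loop body  */
    struct MVMJitExprTree **trees;           /* expression tree for each block */
    int32_t                 header_idx;      /* index of the loop header block */
} MVMJitExprLoop;
```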
timotimo it seems to me like you get a lot of mileage out of "rubber duck design"
brrt i do, yes :-)
timotimo if we have such a loop structure, that may be an interesting cornerstone for doing loop-invariant code motion and such? 11:01
brrt i mean, it's loops we care most about
hmmm... yes, to a degree
although, i'd argue, we'd prefer that to be done in spesh
hey, that *would* be a good idea
question 11:02
would this be hard to implement in spesh
jnthn Only to the degree it's hard to implement anyway :)
timotimo fair enough
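(For reference, a tiny C illustration of what loop-invariant code motion does, independent of how spesh would actually implement it: a computation that does not change across iterations is hoisted out of the loop.)

```c
/* Before: a * b does not depend on i, yet is recomputed every iteration. */
void scale_before(int *out, const int *in, int n, int a, int b) {
    for (int i = 0; i < n; i++)
        out[i] = in[i] + a * b;     /* loop-invariant expression */
}

/* After loop-invariant code motion: compute it once, outside the loop. */
void scale_after(int *out, const int *in, int n, int a, int b) {
    int ab = a * b;                 /* hoisted invariant */
    for (int i = 0; i < n; i++)
        out[i] = in[i] + ab;
}
```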
it's bothering me that i haven't started work on the deopt bridges stuff, but it's not bothering me enough to make me start work on it 11:03
i've thought about adding a little bit of analysis for how useful that could be, but it seemed like adding just that analysis would already be three quarters of the actual implementation ... 11:04
anyway, what's keeping spesh from kicking performance butt at the moment is, in my opinion, that we fail to inline a significant number of cases. and if there's lexicalrefs/localrefs involved, it straight up won't try 11:05
brrt if you need any motivation, you have my assurance that deopt bridges will pay for themselves handsomely in many cases, *especially* once the JIT becomes smarter 11:06
and especially-er if/when we start doing trace compilation
but yeah, that might well be an even more directly relevant bit :-)
timotimo there's also some things i'm worried about related to lifecycle management 11:07
as in: i'd like spesh to run over the same piece of code multiple times in some cases
imagine a frame with multiple loops. the first loop is super simple, the other two not so much
OSR kicks in, adds a logging phase, but only the log instructions in the first loop ever get hit
then it's satisfied with the result and never touches that frame again, because the osrpoint instructions get kicked out (and i'm not sure spesh won't get horribly confused if it hits osrpoints in an already-speshed frame) 11:08
jnthn I think we'll probably restructure things somewhat
timotimo i'm all ears (and a tiny amount of brain, too)
jnthn Around the same time spesh gets kicked off to run on a background thread
I'd like to do something closer to PICs that collect data over time 11:09
timotimo PIC? i only know that acronym from the microprocessor company
jnthn Polymorphic Inline Cache
timotimo ah
jnthn They're an interpreter optimization but more usefully you can look at the data they collected
So you get stats 11:10
timotimo oh, yes, i'd like to see that kind of thing :D
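(A minimal, hypothetical sketch of a polymorphic inline cache, to make the idea concrete; the names and layout are assumptions, not a description of what MoarVM would do. Each dispatch site remembers a few observed type/target pairs plus hit counts, and those counts are the "stats" an optimizer can read later.)

```c
#include <stdint.h>

#define PIC_ENTRIES 4   /* small, fixed number of remembered cases per site */

/* One remembered case at a dispatch site: which type was seen, where it
 * dispatched to, and how often that case hit. */
typedef struct {
    uint64_t type_id;    /* identity of the observed operand type  */
    void    *target;     /* code/handler chosen for that type      */
    uint64_t hits;       /* how many times this case was taken     */
} PICEntry;

typedef struct {
    PICEntry entries[PIC_ENTRIES];
    uint32_t used;       /* how many slots are filled              */
    uint64_t misses;     /* dispatches that matched no cached case */
} PolymorphicInlineCache;

/* Look up a type; record a hit, or a miss plus a new entry if there is room.
 * slow_path stands in for the full, uncached dispatch. */
static void *pic_lookup(PolymorphicInlineCache *pic, uint64_t type_id,
                        void *(*slow_path)(uint64_t type_id)) {
    for (uint32_t i = 0; i < pic->used; i++) {
        if (pic->entries[i].type_id == type_id) {
            pic->entries[i].hits++;
            return pic->entries[i].target;
        }
    }
    pic->misses++;
    void *target = slow_path(type_id);
    if (pic->used < PIC_ENTRIES) {
        pic->entries[pic->used].type_id = type_id;
        pic->entries[pic->used].target  = target;
        pic->entries[pic->used].hits    = 1;
        pic->used++;
    }
    return target;
}
```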
say, i read recently that moar lets nursery collections happen individually per thread without stopping other threads, is that true? 11:11
jnthn No 11:12
Only finalization is concurrent
We'll likely shove that off to a background thread also to cut down our overall pause times 11:13
timotimo right
brrt fwiw, spesh allocation is still awesome 11:21
it's the open-bar-party version of allocation
take as much as you need, and somebody else pays for it; who doesn't want that? 11:22
timotimo hm. will a polymorphic inline cache be just a bunch of data in the instruction stream and an instruction in front that says "jump over this many bytes"?
jnthn brrt: You, the next morning? :P
timotimo jnthn: that'd be more than "as much as you need"
probably
jnthn Ah, point :P
brrt true enough 11:23
maybe i should have said, 'as much as you want'
still, it is a pleasant experience after mucking about with malloc/calloc/free 11:27
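(The "open bar" feel brrt describes is that of region/arena-style allocation: every allocation is a pointer bump out of a big block, and everything is released at once when the work is done. A generic sketch of the idea follows; it is not MoarVM's actual spesh allocator.)

```c
#include <stdlib.h>

/* Generic arena sketch: allocations are pointer bumps out of one block;
 * nothing is freed individually, the whole arena goes at once. */
typedef struct {
    char  *block;
    size_t used;
    size_t size;
} Arena;

static Arena *arena_create(size_t size) {
    Arena *a = malloc(sizeof(Arena));
    a->block = malloc(size);
    a->used  = 0;
    a->size  = size;
    return a;
}

/* "Take as much as you need": no matching free call per allocation. */
static void *arena_alloc(Arena *a, size_t bytes) {
    bytes = (bytes + 7) & ~(size_t)7;   /* keep allocations 8-byte aligned   */
    if (a->used + bytes > a->size)
        return NULL;                    /* a real allocator would chain blocks */
    void *p = a->block + a->used;
    a->used += bytes;
    return p;
}

/* "Somebody else pays for it": one call releases everything at once. */
static void arena_destroy(Arena *a) {
    free(a->block);
    free(a);
}
```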
jnthn Yeah, unsurprisingly there were not many spesh related leaks :) 11:31
(They were all about leaks of things that remain after we specialized) 11:32
brrt yeah, like the jitcode things
i should merge that into even-moar-jit
12:11 FROGGS joined 12:26 khagan joined 12:51 domidumont joined
dalek MoarVM: 3a38749 | timotimo++ | src/core/interp.c:
check for NULL in exception payload and return VMNull in getexpayload 14:37
17:02 domidumont joined 17:43 leont joined 18:08 ggoebel16 joined 18:50 FROGGS joined 19:01 domidumont1 joined 19:18 leont joined 20:07 brrt joined
diakopter timotimo: probably also investigate how the payload could be NULL instead of VMNull? 20:28
timotimo creating a vmexception without setting the payload, obviously ;) 20:41
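(A hedged sketch of the shape of guard the commit message and this exchange describe: if an exception object was created without its payload ever being set, hand back the VM's null object rather than a raw C NULL. The types here are hypothetical stand-ins, not the actual diff to src/core/interp.c.)

```c
#include <stddef.h>

/* Hypothetical stand-in for the exception body, for illustration only. */
typedef struct { void *payload; } ExceptionBody;

/* Never expose a C-level NULL to user code; substitute the VM-level null. */
static void *get_payload(ExceptionBody *body, void *vm_null) {
    return body->payload != NULL ? body->payload : vm_null;
}
```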
20:46 [Coke] joined 21:58 brrt joined