01:17
FROGGS_ joined
07:53
woolfy left
07:54
woolfy joined
11:54
FROGGS joined
13:16
brrt joined
brrt | hi #moarvm | 13:17 | |
timotimo | hey brrt | 13:18 | |
i see you've started introducing yourself :) | |||
brrt | i have | ||
i think my original plan will have to be edited somewhat | 13:22 | ||
since the first month or so has been made redundant by spesh | 13:23 | ||
timotimo | heh. | 13:24 | |
well, spesh can use a few more touch-ups anyway | |||
as soon as jnthn has a design for introducing extra registers and stuff, we'll be able to build more optimizations | 13:25 | ||
brrt | ... | ||
hmmm | |||
timotimo | otherwise, feel free to use the time to practice with dynasm instead | ||
brrt | i should talk to jnthn about that | ||
timotimo | or to build a lua compiler + runtime to make the lua compile-time dependency irrelevant :P | ||
13:30
zakharyas joined
FROGGS | brrt: it would not hurt if you still follow the goals and perhaps finish early and do extra stuff then :o) | 13:35 | |
brrt | i think the goals are the same :-) | 13:36 | |
(brb) | 13:37 | ||
14:00
brrt joined
brrt | but, wait, introducing new registers | 14:01 | |
at first sight, that seems at odds with jit compilation | 14:03 | ||
timotimo | well, it would happen before the jit would kick in | 14:06 | |
during spesh, that is | |||
and it would have to create a proper CFG plus dominance and SSA anyway | 14:07 | ||
so the input to the jit stage wouldn't be distinguishable from regular data | |||
of course we'd have to be careful around deopt spots | 14:08 | ||
but IMO it's a necessary step towards better optimizations | |||
for example, an isstr, isnum, isint, ishash, islist could be turned into a spesh-time-repr-ID-check plus a run-time-null-check | 14:09 | ||
but that would require more instructions, because we have an "isnull" op, but we need the negated output from it, so we'd have to put in an extra negation | |||
or we'd have to know the result is only used once, in an if, and turn that into an unless or so. | 14:10 | ||
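(A minimal sketch in C of the specialization timotimo describes, using hypothetical stand-in types and a made-up REPR id rather than MoarVM's real ones: once spesh has proven the operand's type, the generic isstr test collapses to a run-time null check, and because the only available op answers "is it null?", the result needs an extra negation unless the consuming "if" can be flipped to an "unless".)

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-ins; real MoarVM objects and REPR ids look different. */
    typedef struct { int repr_id; } Object;
    #define STR_REPR_ID 42

    /* Generic check: inspect the object at run time. */
    static bool isstr_generic(const Object *o) {
        return o != NULL && o->repr_id == STR_REPR_ID;
    }

    /* Specialized check: the repr id was proven at spesh time, so only the
     * null test survives -- and since the op available is "isnull", the
     * result has to be negated (or the consuming branch flipped). */
    static bool isstr_specialized(const Object *o) {
        bool is_null = (o == NULL);
        return !is_null;
    }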
brrt | hmm i see | 14:12 | |
do i understand correctly that this is not so much about introducing new registers to the vm as about using more registers in the bytecode? | 14:13 | ||
timotimo | ah, exactly | ||
i wasn't being clear :) | |||
should have said "allocate more registers" | |||
brrt | phew :-) | 14:15 | |
no, that's no problem for me | |||
timotimo | i suppose the problem really is that the first step is going over everything and building the SSA, CFG and dominance | 14:16 | |
brrt | but spesh does that, right? | ||
timotimo | and if you're in the middle of spesh, you would have to know about global things if you want to twiddle something in the middle | ||
brrt | uhuh | ||
timotimo | spesh does it at the very start, aye | ||
brrt | ok, my mental model of spesh -> jit is this: spesh takes an MVMStaticFrame, computes a CFG etc | 14:17 | |
you then have a tree that is supposedly in SSA form, on which you run all sorts of tree-manipulation algorithms | 14:18 | ||
timotimo | currently we mostly do intra-block-changes and then drop any blocks that become entirely irrelevant from the tree | ||
brrt | i can imagine some of these algorithms turning the tree from ssa form to non-ssa form | ||
timotimo | except if you're referring to the SSA as "the tree" | ||
brrt | no, i'm referring to MVMSpeshGraph as the tree ;-) | 14:19 | |
timotimo | OK | ||
not very much happening there so far | |||
brrt | anyway, i imagine the MVMSpeshGraph to be input to the JIT compiler | 14:20 | |
timotimo | that's my understanding as well | ||
off the spesh graph hang all the BBs, which contain both the instructions and the layout of the tree | 14:21 | ||
each BB has a "linear_next" that defines how the BBs will be output to byte code or machine code (as they have to be linearized *some* way) | |||
and the predecessors and successors lists, which record where control can come from and go to; inside the BB, there are different kinds of goto ops that cause branches | 14:22 | ||
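(A rough C paraphrase of the layout described above; the field names are illustrative, not copied from MoarVM's actual headers.)

    typedef struct SpeshIns SpeshIns;
    typedef struct SpeshBB  SpeshBB;

    struct SpeshIns {
        unsigned short opcode;      /* which MoarVM op this is */
        SpeshIns      *next;        /* next instruction within the same BB */
    };

    struct SpeshBB {
        SpeshIns *first_ins;        /* the instructions the block contains */
        SpeshIns *last_ins;
        SpeshBB  *linear_next;      /* order used when emitting byte or machine code */
        SpeshBB **succ;             /* where control can go to ...       */
        SpeshBB **pred;             /* ... and where it can come from    */
        unsigned short num_succ;
        unsigned short num_pred;
    };

    typedef struct {
        SpeshBB     *entry;         /* all the BBs hang off the graph */
        unsigned int num_bbs;
    } SpeshGraph;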
brrt | hmmkay | 14:23 | |
moarvm 'registers' are really just offsets from the stack, right? | 14:27 | ||
timotimo | not sure about that, but it sounds likely | ||
they are not directly corresponding to machine registers, that much is for sure | |||
especially since we can have thousands of them active :) | 14:28 | ||
brrt | i should look it up | 14:30 | |
for most people it's an implementation detail, but not for me :-) | |||
i need to know how to treat them when compiling | |||
timotimo | :) | 14:31 | |
jnthn | .tell brrt the registers/locals are just a blob of memory hanging off ->work in the current frame. That memory also contains an args buffer. | 15:15 | |
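(A simplified sketch, with made-up names, of what "a blob of memory hanging off ->work" means for the JIT: a register index is just an offset into a per-frame array, not a machine register.)

    /* Hypothetical, simplified declarations -- not MoarVM's actual ones. */
    typedef union {
        long   i64;   /* integer register */
        double n64;   /* float register   */
        void  *o;     /* object pointer   */
    } Register;

    typedef struct {
        Register *work;   /* locals/registers; the same blob also holds an args buffer */
    } Frame;

    /* Reading or writing "register idx" is plain array indexing off ->work. */
    static long get_int_reg(Frame *f, unsigned short idx)         { return f->work[idx].i64; }
    static void set_int_reg(Frame *f, unsigned short idx, long v) { f->work[idx].i64 = v; }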
timotimo | ohai jnthn | ||
jnthn | o/ | 15:16 | |
timotimo | should brrt help you think about a design for the "allocate new registers in the spesh stage" thing? | ||
jnthn | Well, design suggestions are always fine. | ||
Just need to be a fairly general mechanism. | |||
timotimo | well, if we're guaranteeing that a newly allocated register is "released" at the end of the changed segment, it should be all right to just ... you know ... "do it"? | 15:17 | |
of course we'll have to make sure the usage counts are there, else the end stage of spesh will just kick them out again :) | 15:18 | ||
jnthn | Well, whatever design we come up with needs to work out nicely with inlining. | 15:19 | |
timotimo | i haven't thought about inlining at all yet, hmm | 15:20 | |
not exactly sure what it'll end up looking like | |||
jnthn | I've some ideas, but probably the easiest way for me to work the design out is to implement it. :) | ||
timotimo | well, i'm looking forward to that :) | ||
i'd like to have a look at the code we generate for loops like for @foo Z @bar -> $a, $b { } | 15:22 | ||
i may build a few benchmarks to pit that against iterating over an index and grabbing the appropriate items out, to see how they compare | |||
this is an idiom i'd really like to be cheap | 15:23 | ||
jnthn | Well, I suspect it's just infix:<Z>(@foo, @bar).map(-> $a, $b { ... }).eager :) | 15:25 | |
probably bbl & | 15:27 | ||
timotimo | and does the map do clever things? | 15:33 | |
FROGGS | jnthn: I have a question: should int be int32 on x86 platforms in perl6? | 16:02 | |
jnthn: nvm :o) | 16:25 | ||
in nativecall.c | 16:41 | ||
else if (strcmp(cname, "stdcall") == 0) | |||
result = DC_CALL_C_X86_WIN32_STD; | |||
else if (strcmp(cname, "stdcall") == 0) | |||
result = DC_CALL_C_X64_WIN64; | |||
that does not make much sense, does it? | |||
timotimo | yeah, that doesn't seem right | 16:48 | |
FROGGS | I think the second should be more like x64Win | 16:55 | |
timotimo | well that second branch is certainly dead at the moment | ||
FROGGS | sure it is | 16:56 | |
timotimo | so i suppose giving "x64win" a try for the cname wouldn't make it much worse, would it? | ||
FROGGS | ohh, in nqp/vm/parrot it is called "win64" | 16:58 | |
hmmm, one can set the calling convention using the role NativeCallingConvention | 17:00 | ||
dunno if someone really does so | |||
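(One possible correction for the nativecall.c branches quoted above, assuming the second test was meant to match a distinct name -- "win64", the name FROGGS found in nqp/vm/parrot -- rather than "stdcall" again; the DC_CALL_* values are dyncall's. A sketch, not a confirmed patch.)

    #include <string.h>
    #include <dyncall.h>

    static int conv_for_name(const char *cname) {
        if (strcmp(cname, "cdecl") == 0)
            return DC_CALL_C_X86_CDECL;
        else if (strcmp(cname, "stdcall") == 0)
            return DC_CALL_C_X86_WIN32_STD;
        else if (strcmp(cname, "win64") == 0)   /* was: a duplicate "stdcall" test */
            return DC_CALL_C_X64_WIN64;
        return DC_CALL_C_DEFAULT;                /* fall back to the platform default */
    }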
19:08
zakharyas joined
20:16
jnap joined
20:26
dalek joined
jnthn | FROGGS: No, int probably wants to keep its Perl 6 meaning. I'd pondered introducing cint types that we export aliased to int32 or whatever. | 20:58 | |
timotimo | that would be at least trivial to put into the setting, no? | ||
jnthn | No, in NativeCall | ||
timotimo | er, yes | 21:02 | |
21:08
btyler joined
23:11
jnap joined
23:36
lue joined
timotimo | pretend i'd have put the stuff about uthash here instead of in #perl6 | 23:36 |