[Coke] | hoelzro: (portfile) ok | 00:10 | |
google news alert: park15.wakwak.com/~k-lovely/cgi-bin...page=PERL6 | 00:16 | ||
00:24 psch joined
00:34 arnsholt joined
00:41 hoelzro joined, flussenc1 joined
00:44 perlpilot joined, leedo joined, rudi_s_ joined
00:45 nwc10_ joined
02:37 flussence joined
06:13 danaj joined
FROGGS | o/ | 06:34 | |
07:58 TEttinger joined
nwc10 | \o | 08:12 | |
10:03 brrt joined
brrt | \o | 10:10 | |
jnthn | o/ | 10:26 | |
brrt | how is packing coming along? | ||
jnthn | Slowly, but it's getting there :) | 10:27 | |
Been catching up on sleep plenty too :) | |||
brrt | very nice | 10:28 | |
ok, i kind of get the feeling i'm spamming, but, i think i can do the following | 10:35 | ||
i can mark the REX byte by emitting a DASM_MARK action *after* it | |||
then i can OR in the bytes in DASM_VREG | 10:36 | ||
when i do need to add a REX byte, i should add this to the offset in dasm_link | 10:39 | ||
so to make space for it | |||
oh, that can be done using DASM_SPACE, which can be inserted at dasm_put time | 10:41 | ||
the size of the spaces, that is | |||
jnthn | Seems very state-machine-y | 10:47 | |
brrt | yes. dasm is very state-machine-y | 10:48 | |
jnthn | Makes sense for what it's doing :) | 10:49 | |
brrt | as in, dynasm-the-preprocessor takes a bit of a global view, but dynasm-the-runtime-assembler works like a state machine | ||
it's interesting enough for a blog post, i think | 10:50 | ||
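For context, a minimal sketch of the REX-prefix patching idea brrt describes above. This is generic x86-64 encoding with hypothetical helper names, not DynASM's actual action codes or data structures.

    /* x86-64 REX prefix: 0100WRXB. Extended registers (r8-r15) need the
     * R/X/B bits set, which can be ORed into an already-emitted REX byte
     * once the dynamic register numbers are known. */
    #include <stdint.h>

    enum { REX_W = 0x08, REX_R = 0x04, REX_X = 0x02, REX_B = 0x01 };

    /* Patch the REX byte at buf[rex_pos] for a reg/rm pair that is only
     * resolved late (e.g. by a DASM_VREG-style action at runtime). */
    static void patch_rex(uint8_t *buf, int rex_pos, int reg, int rm) {
        if (reg >= 8) buf[rex_pos] |= REX_R;  /* extend the ModRM.reg field */
        if (rm  >= 8) buf[rex_pos] |= REX_B;  /* extend the ModRM.rm/base   */
    }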
oh, jnthn, fwiw, we might want to coordinate our yapc talks a bit. I'll be talking about the relation between compiler and interpreter, and that has some intersection with vm engineering in general, i think? | 10:51 | ||
jnthn | Alternatively, we could just JIT everything to "mov"... :) github.com/xoreaxeaxeax/movfuscator | 10:52 | |
brrt | :-D | ||
i know | |||
i think there was a paper on that last year | |||
nice github handle though :-) | 10:53 | ||
jnthn | :-) | ||
Yeah, agree making sure our talks don't over-overlap is a good thing. | |||
brrt | right | 10:54 | |
jnthn | I was going to keep mine at least somewhat high level, and try to extract some of the more general lessons about things that have worked well for us. | ||
brrt | i was going to be talking about the tension between JITting and interpreting | ||
e.g. | |||
for efficient interpretation (supposing your dispatch time is considerable) you actually want a lot of high-level operations that do complex calculations | 10:55 | |
that's not what you want for JITting | |||
and how the interpreter can always assume it knows the full state of the program explicitly, whereas in JITted code a lot of that moves to implicit knowledge encoded in the compiled frame | |||
jnthn | *nod* | 10:57 | |
It's an interesting angle. :) | |||
brrt | case in point: the MVM_coerce_istrue() operations | ||
i hope :-) | |||
jnthn | It's also worth noting though, that: | ||
1) There's also the bytecode size angle; we check for object truth all over, so if_o/unless_o are quite helpful. | 10:58 | ||
brrt | right. that's precisely the kind of tension that i think is interesting | 10:59 | |
jnthn | 2) Many of the complex ops get lowered to simple ones in the process of specialization, so the JIT sees fewer of them. | ||
But yes, lots of interesting trade-offs to discuss :) | |||
brrt | that too | ||
anyway, lunch & | |||
jnthn | ooh, not a bad idea... :) | ||
brrt | i *think* i should be able to produce a hacked dynasm tomorrow :-) | 11:00 | |
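For context, a toy sketch of the dispatch-cost trade-off discussed above: each opcode costs one trip around the dispatch loop, so a coarse fused op like OP_IF_TRUE does in one dispatch what separate test and branch ops would need two dispatches for, while a JIT pays no dispatch cost and tends to prefer the fine-grained form it can optimize across. Illustrative C only, not MoarVM's interpreter.

    #include <stdio.h>

    enum { OP_LOAD, OP_IF_TRUE, OP_PRINT, OP_HALT };

    static void run(const int *code, const int *consts) {
        int acc = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {                 /* one dispatch per opcode */
                case OP_LOAD:                     /* acc = consts[operand]   */
                    acc = consts[code[pc++]];
                    break;
                case OP_IF_TRUE: {                /* coarse op: test + branch fused */
                    int target = code[pc++];
                    if (acc) pc = target;
                    break;
                }
                case OP_PRINT:
                    printf("%d\n", acc);
                    break;
                case OP_HALT:
                    return;
            }
        }
    }

    int main(void) {
        const int consts[] = { 42 };
        /* load consts[0]; if truthy, skip the HALT at index 4 and PRINT */
        const int code[] = { OP_LOAD, 0, OP_IF_TRUE, 5, OP_HALT, OP_PRINT, OP_HALT };
        run(code, consts);
        return 0;
    }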
11:37 colomon joined
12:44 Ven joined
12:48 Ven joined
13:02 flussenc1 joined
13:04 rudi_s joined
13:05 hoelzro_ joined
13:28 moritz joined
13:52 zakharyas joined
14:26 JimmyZ_ joined
14:46 JimmyZ_ joined
FROGGS | side note: the libffi code path passes all rakudo sanity tests except the callback one... (which is not handled yet at all) | 15:26 | |
jnthn / brrt: do I need to care about spesh when it comes to nativecall? | |||
16:33 Ven joined
16:47 zakharyas joined
jnthn | FROGGS: Not yet. | 16:50 | |
(Eventually we want to be able to JIT them awesomely.) | 16:51 | ||
FROGGS | jnthn: I guess that won't be trivial, when looking at the nativecall functions... | 17:04 | |
jnthn | Yeah, it's a project of its own :) | 17:05 | |
17:42 mj41 joined
timotimo | fortunately we only have to jit the native calls for the platform we're already jitting | 18:22 | |
18:31 vendethiel joined
19:01 mj41 joined
19:15 zakharyas joined
FROGGS | callbacks are working O.o | 19:17 | |
timotimo | sweet! | 19:39 | |
japhb | FROGGS: So what all do we gain from having both nativecall backends? More platform/arch support? | 19:40 | |
FROGGS | japhb: more platforms, yes | 19:41 | |
and I can imagine that libffi is a little bit faster when calling C functions repeatedly | 19:42 | ||
japhb | "imagine"? Is the API weighted in favor of faster repeated calls? Or do you have timings? | 19:43 | |
FROGGS | japhb: for dyncall we alloc/free more stuff when making the call into C, for libffi we do that once | 19:44 | |
japhb | Ah, OK, gotcha. Is libffi reentrant/threadsafe in that case? | ||
FROGGS | I think so | 19:45 | |
but I'm not an expert when it comes to that | |||
hmmm, actually, I'm not sure | |||
timotimo | if it has anything that's sort of global, then it's definitely not threadsafe | 19:49 | |
FROGGS | timotimo: the question is: how often do we call nqp::nativecallbuild per nativecall sub? | 19:51 | |
timotimo | i think we only do that once a sub gets "set up" | 19:54 | |
you can find it in NativeCall.pm, i think it is guarded by a $!setup flag | |||
FROGGS | yeah, I know | 19:56 | |
so.. hmmm | |||
so I probably have to move the stuff that allocates the argument types and argument value slots to the call itself | 19:57 | ||
and not store it in the nativecallbody | |||
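For context, a minimal sketch of the libffi pattern FROGGS alludes to: prepare the call interface (the ffi_cif) once per native sub, then reuse it for every call, rather than rebuilding the call metadata on each call. This is plain libffi usage, not NativeCall's actual code; link with -lffi -lm.

    #include <ffi.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        ffi_cif cif;
        ffi_type *arg_types[1] = { &ffi_type_double };

        /* one-time setup: describe the signature double(double) */
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_double, arg_types) != FFI_OK)
            return 1;

        /* repeated calls reuse the prepared cif; only the argument
         * values change per call */
        for (double x = 1.0; x <= 3.0; x += 1.0) {
            double result;
            void *arg_values[1] = { &x };
            ffi_call(&cif, FFI_FN(sqrt), &result, arg_values);
            printf("sqrt(%.1f) = %f\n", x, result);
        }
        return 0;
    }

Once prepared, the cif is only read by ffi_call, so sharing it across threads should be fine; it is the per-call argument buffers that would need to move out of the shared nativecallbody, as FROGGS suggests above.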
timotimo | at some point we want to do it differently anyway, so that we can know everything we need to know at jit time | 19:58 | |
also, the decont_all sub we use still annoys me a whole lot | |||
because we can't de"virtualize" the loop at all | |||
FROGGS | do we still have that decont_all? | 19:59 | |
I thought we got rid of it when making 'is rw' params work | |||
timotimo | i don't think so, let me have a look | 20:00 | |
yes, we do | 20:01 | ||
but it had to learn about native containers | |||
FROGGS | yes, I did that | 20:03 | |
20:04 vendethiel joined
timotimo | right | 20:04 | |
21:16 FROGGS_ joined
dalek | MoarVM: cad11bd | timotimo++ | src/spesh/dump.c: output a callsite's flags in the spesh dump | 23:16 |
timotimo | the reason optimize_can_op didn't work was that we just checked the object pointer directly, but can_method_cache_only checked via MVM_is_null | 23:34 | |
i'm now running a spectest with re-enabled optimize_can_op | 23:35 | ||
which may get rid of a bunch of pieces of sink code-gen for us | |||
only post-spesh, of course | |||
hmm | 23:46 | ||
if we have VMNull as the known type of something | 23:47 | ||
is there any way whatsoever for can to return 1 for any method? |
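For context, a sketch of the bug class timotimo describes: when "null" is represented by a sentinel object (VMNull) rather than a C NULL pointer, a bare pointer comparison misses it. The names below are hypothetical, not MoarVM's actual definitions.

    #include <stddef.h>

    typedef struct Object Object;

    typedef struct {
        Object *vm_null;   /* the per-instance sentinel "null" object */
    } Instance;

    /* what the broken check amounted to: only a genuine NULL pointer counts */
    static int is_null_naive(Object *obj) {
        return obj == NULL;
    }

    /* what an MVM_is_null-style check does: treat the sentinel as null too */
    static int is_null(Instance *vm, Object *obj) {
        return obj == NULL || obj == vm->vm_null;
    }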