00:27
colomon joined
[Coke] | timotimo: shorter to write -j | 01:20 | |
01:48
ilbot3 joined
02:19
_longines_ joined
02:21
JimmyZ joined
02:29
nwc10_ joined
02:30
orbus joined
02:42
colomon joined
03:07
nebuchad` joined
03:09
xiaomiao joined
03:13
harrow joined
03:17
TimToady joined
03:19
_longines joined
04:26
tokuhiro_ joined
04:30
dalek joined
05:20
ShimmerFairy joined
06:08
vendethiel joined,
FROGGS joined
07:07
Ven joined
07:34
lizmat joined
07:56
lizmat joined
08:02
lizmat joined
08:11
lizmat joined
08:45
Ven joined
09:49
rarara joined
10:11
ShimmerFairy joined
10:13
Ven joined
10:42
brrt joined
11:16
Ven joined
11:51
Ven joined
brrt | good * | 11:55 | |
jnthn | o/ brrt | 12:16 | |
brrt | how is today? :-) | ||
brrt kind of has post-yapc start-of-academic-year blues | 12:17 | ||
i don't want to start doing responsible things | |||
nwc10 | I am having massive difficulty concentrating | 12:18 | |
despite knowing what I want to do, and having a reasonably good idea how to do it | |||
jnthn is still exhausted | |||
Though I seem to be getting past the mental exhaustion some | 12:19 | ||
Still some way to go on the physical, though feel notably icky today 'cus my guts have decided to be unhappy (a fairly typical delayed-reaction-to-stress thing for me) | 12:20 | ||
brrt | ah, that sucks | ||
hope it get's better soon | 12:21 | ||
jnthn | Yeah. Means I need to eat boring stuff for a while | ||
brrt | gets | ||
jnthn | Though my wife insists it's not boring, but healthy :) | ||
brrt | in case you're not gluten-sensitive, i can recommend oatmeal :-) | ||
yes, terribly hipster food, but also boring and healthy :-) | |||
jnthn | I'm not :) | ||
brrt | fortunately | 12:22 | |
jnthn | Hm, I wonder if oatmeal stout also helps... ;) | ||
brrt | i'm not sure... | ||
jnthn decides not to do that experiment | |||
nwc10 | jnthn: mine also do that after various foods, particularly some that I enjoy (eg Indian) | ||
jnthn | Oatmeal stout can be a very good drink, though. | ||
brrt | nwc10: re: the pypy / rpython thing. i've been sufficiently critical of that approach on other times | 12:23 | |
but at least the pypy guys themselves believe that their approach can be generalized to other systems | |||
jnthn | nwc10: I've eaten Indian at least once a week for 20 years, so it normally has little effect; in fact, in normal conditions Indian food (if not stupidly hot) tends to have a calming effect. | ||
brrt | surprisingly few other teams do, but i'm not sure what that says about whom | ||
nwc10 | could you clarify "that approach"? Something about how they don't see their project as a VM? | ||
at least, the VM-like bit of their project not actually being a VM. | 12:24 | ||
brrt | well, it seems to me they have successively tried to abstract away from the icky bits of writing a VM and JIT | ||
nwc10 | (nothing wrong with PyPy. Just how they self-describe) | ||
brrt | and wrote a meta-meta-meta-compiler-vm-jit thingy that ultimately resolves to a (very decent) JIT+interpreter system | 12:25 | |
so yeah. it's great for them. but i don't think it's been a practical approach | 12:26 | ||
nevertheless, they're years ahead of us | |||
nwc10 | for now. We can iterate configure/build/test cycles much faster than they can. | 12:28 | |
brrt | yes. and we've access to the papers written in the meantime :-) | 12:31 | |
nwc10 | what I'm also curious about is that the pypy release rate has slowed down. | 12:34 | |
Now we have 2.6.1 about 3 months after 2.6: morepypy.blogspot.co.at/2015/08/pyp...eased.html | |||
we used to have 2.$n+1 about 3 months after 2.$n | |||
brrt | well, they're 10 years old or so | 12:35 | |
nwc10 | not sure if they're scared of reaching 2.7 | ||
let alone 2,8 | |||
brrt | so they must have had a slower release rate earlier | ||
nwc10 | er, 2.8 | ||
brrt | hmm, yeah, i guess i can see why not :-) | ||
12:45
Ven joined
brrt | we're halfway through compiling a single 170-node expr tree | 12:50 | |
as a wise man once said, 'brrt, your assumptions suck' :-P | 12:53 | ||
FROGGS | *g* | ||
brrt | one of the things that i still want to do is to convert the tree structure back to a linear structure, interleaved with 'special' tiles that do all the labelling magic now | 12:57 | |
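A rough sketch of the idea brrt describes above: walking an expression tree and emitting a flat, linear sequence of tiles, with "label" pseudo-tiles marking branch targets. The types and names below are hypothetical illustrations, not MoarVM's actual JIT data structures.

```c
/* Hypothetical sketch: flatten an expression tree into a linear tile list,
 * emitting a label pseudo-tile at the start of any subtree that is a branch
 * target. Illustrative types only, not MoarVM's JIT structures. */
#include <stdio.h>

struct node {
    const char  *op;                /* e.g. "load", "const", "add" */
    struct node *left, *right;
    int          is_branch_target;  /* does some branch jump here? */
};

static int label_counter = 0;

static void linearize(const struct node *n) {
    if (!n)
        return;
    /* a branch target gets its label at the start of its subtree's code */
    if (n->is_branch_target)
        printf("label_%d:\n", label_counter++);
    linearize(n->left);             /* post-order: operands first ... */
    linearize(n->right);
    printf("  tile %s\n", n->op);   /* ... then the tile for this node */
}

int main(void) {
    struct node a   = { "load",  NULL, NULL, 0 };
    struct node b   = { "const", NULL, NULL, 0 };
    struct node add = { "add",   &a,   &b,   1 };  /* pretend a branch targets it */
    linearize(&add);
    return 0;
}
```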
timotimo | o/ | 13:00 | |
dalek | MoarVM/even-moar-jit: 821738c | brrt++ | src/jit/ (3 files): Fix a set of register allocator bugs |
13:07 | |
brrt | \o timotimo | ||
jnthn | .oO( If you made the same bug twice, would you be disciplined enough to say you fixed a bag of bugs? ) |
timotimo | brrt: how can you tell how far the compiler makes it? when i try to run something that uses it, i end up with a very unhelpful stack trace inside gdb :) | 13:08 | |
well, i *could* probably disassemble and have a closer look, but ... enh | |||
brrt | well, the compiler makes it to the end if it doesn't burn into flames | 13:09 | |
so you don't have to worry so much about that | |||
(i'm not sure if i would jnthn) | |||
anyway, i do actually disassemble and inspect registers and stuff | 13:10 | ||
i have a little gdb script that sets up moar+nqp for me | |||
timotimo | you as the jit author are expected to :) | ||
me as a "be amazed and cheer from the sidelines" type of person ... not so much :)) | |||
brrt | yeah, i kind of expected that | 13:11 | |
hmm | 13:12 | ||
oh, and i have MVM_JIT_BYTECODE_DIR set obviously | |||
timotimo | of course :) | ||
i bet the generated code already looks a lot nicer than the old code used to :) | 13:13 | ||
what with all the repeated loads and stores partially removed :) | |||
brrt | 'a lot' - i'm kind of sceptical there | 13:14 | |
also | |||
there's a bug in the kitchen | |||
as in | |||
timotimo | an insect trying to nom all your food? | 13:15 | |
brrt | i'm fairly sure i never do [reg1+reg2*4+0x48] | ||
timotimo | hm | ||
brrt | no, these are just my rats.... | ||
timotimo | :) | ||
brrt | ok, i see | 13:16 | |
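For context on the addressing mode brrt mentions: [reg1+reg2*4+0x48] is x86-64 base + scaled-index + displacement addressing, the kind of thing a compiler emits for indexing a 32-bit array that sits at offset 0x48 inside a struct. The C below is an illustration with a made-up struct, not MoarVM code, and the instruction in the comment is only what a compiler would likely produce.

```c
/* Illustration only: the struct layout here is made up, not a MoarVM type. */
#include <stdint.h>

struct frame_like {
    char    pad[0x48];   /* fields before the array; the array starts at 0x48 */
    int32_t slots[16];
};

int32_t read_slot(const struct frame_like *f, long i) {
    /* likely compiles to something like: mov eax, [rdi + rsi*4 + 0x48] */
    return f->slots[i];
}
```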
timotimo | i forgot, we don't have common subexpression elimination, right? | ||
so the code is possibly a bit bloated at the moment? because we duplicate things all the time so we can properly tile the tree? | |||
brrt | we don't have CSE, indeed | 13:20 | |
yes, our tree is bloated | |||
no, that isn't to help tiling | |||
actually | |||
we do duplicate nodes to resolve tile conflicts | 13:21 | ||
but that's not the same as lacking CSE | |||
timotimo | mhm | ||
well, CSE would help that, i imagine | 13:22 | ||
that's why i lumped them together | |||
brrt | yeah, CSE could do nice things | 13:23 | |
one of the nicer things still to do is to automatically order computations shared between branches before the branch | |||
timotimo | ah, invariant code hoisting or something like that | 13:24 | |
brrt | it's a correctness issue. code hoisting is also really cool, but i'm not sure whether most of the benefit wouldn't be had at the spesh level | 13:25 | |
jnthn | Possibly. | 13:27 | |
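A generic before/after sketch of the transformation brrt is talking about: a computation used by both arms of a branch is ordered once, before the branch, instead of being duplicated in each arm. Plain C for illustration only; this is not MoarVM's expression-tree code.

```c
/* Before: the shared subexpression x * y is duplicated in both arms. */
int without_hoisting(int x, int y, int flag) {
    if (flag)
        return (x * y) + 1;
    else
        return (x * y) - 1;
}

/* After: the shared computation is ordered before the branch. */
int with_hoisting(int x, int y, int flag) {
    int shared = x * y;
    return flag ? shared + 1 : shared - 1;
}
```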
brrt | looks like i have another code-generation bug on my hands | ||
jnthn | Me too :) | 13:28 | |
Though mine is the QAST -> JAST one I think others already banged their heads against for 10 hours... | 13:29 | ||
brrt | fun times :-) | 13:37 | |
FROGGS | no. | ||
jnthn | wtf | 13:38 | |
Yeah, it's odd alright | |||
FROGGS | hehe | ||
timotimo | jnthn: did you know the optimizer has a flag called "flattened" that "incorporate_inner" expects to be set if an inner block has been flattened, but we only initialize it to 0, never set it to 1? :) | 13:39 | |
sadly, setting it after having done a flattening operation reveals a problem that has been hidden before :| | 13:40 | ||
FROGGS | timotimo: do not disturb him nao! | ||
timotimo | oh | ||
FROGGS | :D | ||
jnthn | timotimo: I suspect that's a case of "I stole the logic from NQP but didn't get so far as making it work as well as NQP" :) | ||
timotimo | right, i just expected a short comment about it | ||
jnthn | FROGGS: With the JVM build times I've all the time in the world :P | ||
FROGGS | jnthn: hehe | ||
jnthn | It really does seem to get the tree for the entire CORE.setting just about | 13:41 | |
timotimo | jnthn: with my lex-to-loc branch it's even worse; there seems to be some oversharing going on that causes a QAST::Var to be turned into a local in a block that's in some totally different place :| | 13:42 | |
jnthn | What's really odd is that it shouldn't get that far | ||
Because nqp::p6configposbindfailover isn't implemented for the JVM | |||
But it should blow up saying so | |||
FROGGS | ohh | 13:43 | |
jnthn | Somewhere along the way | ||
Not get to the end and have a null | |||
FROGGS | jnthn: how did you find out that this op is not implemented? | 13:45 | |
jnthn | FROGGS: I didn't; I just knew it was the only added op, and was in that code a moment ago fixing the multi-dispatcher, and thought to search for it and found it's missing on JVM :/ | 13:47 | |
I'm currently trying to work out why it didn't explode | |||
rarara | For me, it is surprising that the PyPy people claim an improvement on simple loops which sum two vectors using SIMD. I did many tests during my PhD and that didn't work. Check for instance this gist.github.com/anonymous/7dca6d4b7dbd12c19fb4 (gcc -S -mavx -std=gnu99 -O3 source.c); the non-parallelized run is faster for me. I don't think I'm making errors | 13:48 | |
(-S is to see the generated assembly; run gcc without it to actually compile) | 13:51 | ||
Non-optimized: passed 0 sec and 450765901 nanosec; Optimized: passed 0 sec and 467647205 nanosec. Result checked! | 13:52 | ||
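For reference, a minimal sketch of the kind of benchmark rarara seems to be describing: sum two float vectors element-wise and time the loop, built with something like gcc -mavx -std=gnu99 -O3. This is an assumed reconstruction, not the code from the linked gist; the array size, names and timing details are made up.

```c
/* Assumed reconstruction of a vector-sum timing benchmark; not the gist's code.
 * Build with something like: gcc -mavx -std=gnu99 -O3 bench.c -o bench */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static void sum_vectors(const float *a, const float *b, float *out, size_t n) {
    /* with -O3 -mavx, gcc will usually auto-vectorize this loop */
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

int main(void) {
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    float *c = malloc(N * sizeof *c);
    for (size_t i = 0; i < N; i++) { a[i] = (float)i; b[i] = (float)(N - i); }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    sum_vectors(a, b, c, N);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
    printf("passed %ld sec and %ld nanosec (c[1] = %f)\n",
           ns / 1000000000L, ns % 1000000000L, c[1]);

    free(a); free(b); free(c);
    return 0;
}
```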
FROGGS | jnthn: do you also observe that we hit the CATCH block twice in as_jast(QAST::Op ...)? | 14:13 | |
jnthn | FROGGS: I decided to work on getting JVM closer to working with GLR rather than investigating the error reporting issue | 14:15 | |
(The latter is worth doing but with something rather smaller than CORE.setting ;)) | |||
FROGGS | jnthn: np, I can do that :o) | ||
jnthn | FROGGS: OK | 14:17 | |
FROGGS: I'm about to push a patch that clears up the JVM guts-y code some | |||
And adds throws to point out code that needs updating | |||
(rather than it exploding with odd errors) | 14:18 | ||
Do you want to take on filling those few bits out? | |||
.oO( Or is FROGGS not going to answer that until I've actually ended up doing them... :P ) |
14:27 | ||
FROGGS | *g* | 14:35 | |
I can take over filling holes I think | |||
15:20
FROGGS[mobile] joined,
ShimmerFairy joined
15:26
psch joined
15:53
FROGGS joined
16:25
Ven joined
17:17
Ven joined
17:50
Peter_R joined
17:54
tokuhiro_ joined
18:31
vendethiel joined
18:55
tokuhiro_ joined
20:14
brrt joined
brrt | rarara: my suspicion is that this is related to a smaller total number of iterations in the vectorized case for pypy | 20:14 | |
i.e. the iterating loop isn't totally optimal, hence reducing the iterations increases the performance | 20:15 | ||
20:16
tokuhiro_ joined
21:06
lizmat joined
21:22
lizmat_ joined
21:55
lizmat joined
22:01
psch joined
22:17
tokuhiro_ joined
23:03
FROGGS joined
23:19
Peter_R joined
23:35
FROGGS joined