00:21 benabik joined 01:16 FROGGS_ joined 02:04 benabik joined 02:08 jnap joined 02:24 donaldh joined 04:26 jnap1 joined 05:26 jnap joined 05:57 vendethiel joined 06:24 vendethiel joined 06:26 vendethiel joined 06:27 jnap joined 06:32 vendethiel joined 06:56 FROGGS joined 07:03 Ven joined 07:05 zakharyas joined 07:16 donaldh_ joined 07:28 jnap joined 08:29 jnap joined
jnthn short backlog is short... 08:53
FROGGS jnthn: come over to #perl6... we have cookies! :o) 09:05
09:09 domidumont joined 09:18 domidumont joined 09:28 woosley joined 09:29 jnap joined 09:34 domidumont joined 10:30 jnap joined
nwc10 jnthn: stage timing numbers are sufficiently variable on the Pi (with spinning rust swap) that I don't think that I can make sensible decisions about optimisation strategies 10:51
(other than "reduce the I64s")
but it is adequate for basic porting, and basic "does it pass tests"
I think that the problem is "spinning rust swap" - solid state swap (of any form) might be more repeatable 10:52
jnthn nwc10: Makes sense. At the point things are giving more stable numbers, profiling at C level may also be informative. 10:55
nwc10 there's more important stuff to nail first 10:56
portability
and performance on "modern" hardware against the perceived competition
where I am actually unsure what the perceived competition is 10:57
not just using woolly language to be diplomatic
11:31 jnap joined
jnthn Mostly for me it's not so much winning benchmarks as having people using Rakudo not immediately feel "ooh, it's slow!" 11:34
nwc10 yes, sorry, to be clear, by "performance" I didn't mean benchmarks 11:39
having viable concurrency is going to mean that "good enough" elsewhere is sufficient
whipuptitude and manipulexity for the win. 11:41
bonsaikitten just don't be consistently last in benchmarks ;)
12:32 jnap joined 13:32 jnap joined 13:35 jnap left 14:00 btyler joined 14:21 jnap joined 14:28 retupmoca joined
japhb I stand by my general feeling from last year: <2x slower than perl5: good; <10x slower than perl5: OK; anything worse: BAD. 16:07
TimToady well, we probably need to fix the O(N^2) in strings
nwc10 jnthn: I'm also thinking that 10x is the minimum target
16:08 jnap1 joined
japhb TimToady: Yeah, string performance is near the top of my "still painful in r-m" list. :-/ 16:10
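The usual shape of this kind of string slowdown is repeated in-place concatenation, which can go quadratic if each append copies everything built so far; whether that is the exact O(N^2) meant above is not stated in the channel, so the Perl 6 sketch below is only illustrative:

    my $slow = '';
    $slow ~= "line $_\n" for ^50_000;                # each append may recopy the whole buffer so far

    my $fast = (^50_000).map({ "line $_\n" }).join;  # build the pieces, join once: stays linear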
TimToady my gut feeling is that a lot of our improvement will come from better native int array handling, which will also be fundamental for NFG 16:11
it would be nice if we could write the NFG implementation in Perl 6 and have it be fast 16:12
jnthn Seems unlikely in the near term.
At least, it's not the fast path to having strings perform well.
There's just a bunch of hard things needing doing to get Perl 6 code to perform well. 16:13
I'm working through them, but there's a dozen other things needing my attention too.
TimToady sure, I'm just looking forward to the day when I can write my C code in Perl 6 :) 16:14
jnthn *nod*
The other thing that'd help on Moar is figuring out why we can't build with -O3 on gcc
TimToady is that also true of clang? 16:15
jnthn Not sure.
I do know that when I got some native int loop benchmark to run faster than my system Perl 5 locally by a factor of 2-3, nobody else could reproduce it with other compilers. Nor can I, for that matter.
r-m in my Linux VM using GCC is notably slower than r-m built with MSVC running outside the VM 16:16
Of course, virtualization may be a little to blame, but it can't take all the blame.
But we only build with gcc -O1 at present.
So -O3 may easily make up the difference.
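The native int loop benchmark mentioned above was not pasted into the channel; as a stand-in, it was presumably something with this rough shape (the loop count and variable names here are invented):

    my int $i   = 0;
    my int $sum = 0;
    while $i < 50_000_000 {
        $sum = $sum + $i;
        $i   = $i + 1;
    }
    say $sum;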
TimToady I'm thinking we should have NFG and uniprops written in Perl 6 for portability, then backtranslate to native VM where necessary for speed 16:18
or at least in nqp rather than in C
jnthn NQP is more feasible. 16:19
But doing that right now is not likely to lead us quickly to faster strings. 16:20
Post-JIT/inlining/escape-analysis etc? Perhaps then it's OK. 16:21
nwc10 doing it right now in NQP is likely to get to a working prototype more quickly, even if that implementation isn't (yet) fast 16:22
TimToady uniprops in non-C would give us uniprops on the other backends
nwc10 someone doing it at all is better than it not happening yet :-)
TimToady and just thinking we'll understand the semantics of NFG better if it's prototyped in a higher language
nwc10 meanwhile, unlike last Sunday, the prospects are looking good for the hammock
TimToady and certainly a linear but slowish P6 implementation of NFG will beat out a fast but quadratic C implementation for many real-world problems 16:24
jnthn True
TimToady and we could call it NFGish, and play with it, while keeping the default at codepoints for now 16:25
well, just thinking where I could apply the most leverage over this summer that would be complementary to what you're doing 16:26
jnthn If somebody wants to work on doing this at NQP level, that works fine for me. If so, I'll focus on trying to make it so the NQP impl runs fast enough anyway :) 16:27
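No prototype existed at this point; the following is only a toy sketch of the "NFGish" idea in Perl 6, giving every multi-codepoint grapheme cluster a synthetic (negative) id so that later operations can treat it as one unit. The segmentation is deliberately crude (any nonspacing mark extends the previous cluster, rather than the full UAX #29 rules) and every name in it is invented:

    my %synthetic;           # codepoint sequence => synthetic (negative) id
    my $next-id = -1;

    sub nfgish(Str $s) {
        my @clusters;
        for $s.ords -> $cp {
            if @clusters && uniprop($cp) eq 'Mn' {   # nonspacing mark...
                @clusters[*-1].push($cp);            # ...joins the previous cluster
            }
            else {
                @clusters.push([$cp]);
            }
        }
        @clusters.map: -> @c {
            @c == 1 ?? @c[0] !! (%synthetic{@c.join(',')} //= $next-id--);
        }
    }

    say nfgish("x\x[301]y");   # a synthetic id for x + combining acute, then 121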
16:27 lue joined
jnthn Well, question is what I should work on. I suspect performance and concurrency things and hard bugs? 16:27
TimToady makes sense to me 16:28
jnthn That's the kinda trajectory I'd been considering.
lizmat I wouldn't mind giving NFG a stab in NQP
TimToady another direction I'm anxious to push is multi-dim arrays, but I suspect someone other than jnthn++ should do that
lizmat didn't we have patches for that already ? 16:29
TimToady it does get down into repr issues thoguh
lizmat but jnthn wasn't happy with them ?
TimToady though, even
jnthn Well, yeah, it wants some level of VM support
native arrays surely do
Same for fixed size
japhb I still really want to be able to look at a single memory extent with many different packed array "views" on it. 16:31
TimToady just gets a little tired of writing .[@x]».[@y] to do 2-dimensional slices, when [@x;@y] oughta work
and there's a bunch of RC tasks waiting for matrices :) 16:32
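Spelled out, the workaround versus the wished-for form looks roughly like this (the @m[@x;@y] multi-dimensional form did not work at the time):

    my @m = [1, 2, 3], [4, 5, 6], [7, 8, 9];
    my @x = 0, 2;          # row indices
    my @y = 1, 2;          # column indices

    say @m[@x]».[@y];      # the workaround: slice the rows, hyper-slice the columns
    # say @m[@x;@y];       # the form that "oughta work"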
japhb rcdd
:-)
TimToady ayup :)
RC is a pretty good measure of generalpurposenesslessnesslessness 16:33
japhb Well it certainly does reward whipuptitude
lizmat .oO( RC? ) 16:34
nwc10 fail! a big cloud 16:35
japhb lizmat: Rosetta Code 16:36
lizmat ah, duh :-)
japhb++
17:03 FROGGS joined
jnthn OK, time to make dinner, then some hacking :) 17:47
17:47 woolfy1 joined 18:39 zakharyas joined
timotimo yay 19:19
i wonder if jnthn dinner'd successfully yet 20:21
lizmat perhaps jnthn is enjoying a post-dinner nap 20:23
timotimo mhm :) 20:24
lizmat or he is silently hacking away not paying attention to us attention cravers :-) 20:26
jnthn I cooked steak with béarnaise sauce, nommed it, then felt like a walk :) 20:28
20:46 benabik joined
jnthn So, let's find out what exactly causes the NQP parse fail in the latest spesh stuff... 20:48
Seems it's something to do with using the spesh log of invoke_o results 21:12
Darn, this is a pain to find... 21:16
hah, narrowed it down to something involving the pblock rule and an opt done off invoke_o's logging 21:28
tadzik jnthn: do you have a working testcase for that closures/nativecall bug? 21:29
jnthn tadzik: No, I ran into it, investigated enough to find the workaround I sent you as a hack, and that was all... 21:30
tadzik ah
not sure if I told you, but the hack makes things segfault :) 21:31
jnthn ergh 21:32
OK, that didn't happen here
21:37 benabik joined
jnthn Grr 21:57
The more I narrow it down, the odder it gets
By now I've got it down to the exact transformation that breaks it, and it still makes little sense... :/ 23:07
Well, will sleep on it. 'night 23:14