nwc10 jnthn: I think your list is incomplete - also, a year ago, we didn't have Rakudo working, did we? 08:02
jnthn nwc10: Perhaps not fully; I think the first monthly release went out alongside the first Rakudo release that had some level of support. 09:16
timotimo congratulations all around 09:53
"sp_guad"? :) 09:54
jnthn Context? :)
timotimo the changelog 09:55
FROGGS what changelog?
this? github.com/MoarVM/MoarVM/blob/mast.../ChangeLog 09:56
jnthn cdn.meme.am/instances/500x/58344467.jpg
FROGGS :P 09:57
timotimo the last dalek message
jnthn Found it :)
hah, the typo was from the commit log :) 09:58
dalek arVM: d1ed753 | jnthn++ | docs/ChangeLog:
Fix typo; timotimo++.
timotimo is AFK again 10:05
jnthn Anyone have any release blockers? 14:04
dalek arVM: 104e48e | jnthn++ | VERSION:
Bump VERSION.
14:07
jnthn Well, clean NQP and Rakudo spectest with that commit (on Linux). 14:18
moarvm.org/releases/MoarVM-2015.01.tar.gz 14:22
lizmat jnthn++ :-) 14:23
jnthn Tagged in the git repo also. 14:24
dalek moarvm.org: 1cfbe69 | jnthn++ | / (3 files):
Update site for 2015.01 release.
14:44
nwc10 Result: PASS 14:45
(also Linux)
jnthn phew ;) 14:47
OK, release process done.
Time to break stuff again! :P
FROGGS declaration after code ftw! /o/ 14:50
brrt ehm... what would be a really convincing reason *not* to redo the whole JIT using llvm 14:55
context: i'm writing a blog post on future JIT improvements, and I find I'm not so well able to answer this question 14:56
more context: I want to make these plans in the open, so as to hopefully inspire people to hack on perl6 and the moar-jit 14:57
moritz llvm is a big dependency, for one 15:04
second, redoing sounds like a whole lot of work 15:05
brrt true.... 15:07
JimmyZ well, it's very easy, it's C++, and jnthn++ doesn't like it.
:P 15:08
brrt on the other hand... moarvm has a history of using modern libraries rather than reinventing the wheel
is 'it's very easy' a reason, or your judgement of the reasons? 15:09
i'm not very fond of C++ myself
i'm trying to make the case as an disinterested outsider
:-)
s/an/a/ 15:12
nwc10 summon leont
brrt where does he lurk
moritz #p5p?
nwc10 and #perl6
brrt i'll take this to #perl6 then 15:13
jnthn The issue isn't whether I like C++, it's whether I know it well enough or have the experience to make good design decisions if we build stuff using it. I don't. 15:30
We use modern libraries where it's in a non-strategic area, for sure. 15:31
Abstracting I/O, threads, and atomic ops, or building a better hash implementation, or FFI implementation, is not core domain. 15:32
We don't use, say, ICU, 'cus that's an area where we're looking to do something innovative (of note, NFG) 15:33
And the dynamic optimization and JIT stuff is also quite core.
brrt fair enough. but you could argue that dynamic optimization is core and yet code generation is not 15:43
jnthn You could 15:44
But then, we ain't re-invented the whole wheel there either. 15:49
(Using dynasm)
brrt that is true
jnthn Which is a very light dependency.
moritz brrt: what are the arguments in favor of using LLVM?
brrt: and what has changed since the original implementation?
jnthn I know some time back, LLVM + precise GC could be some fun; maybe things are better there now. 15:50
brrt the argument is basically this - if we want moarvm to be faster, we'll need to move from our current JIT representation to something that is much closer to the metal
what has changed is that the original implementation is half a year old and we're ever looking for improvements :-) 15:51
so, suppose I'd tell an outsider 'our JIT is much too simplistic and can't do any optimizations', they'd be likely to answer 'why don't you just use LLVM' 15:53
the GC issue is an issue, yes
*not* that i'd like to throw away all the hard work done on the JIT just yet :-) 15:54
jnthn Well, when most people say JIT they mean not just the code-gen, but also the bits we tend to call spesh 15:55
It's the code-gen that doesn't yet do any optimizations.
brrt nods 15:56
that is very much true. And if we'd ever do a 'trace JIT', then we'd almost certainly do the trace part on the spesh level 15:57
jnthn Aye
brrt which, I think, is a pretty good design, all things considered 15:58
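To make the spesh/code-gen split discussed above concrete, here is a toy sketch in C; every name and threshold is invented for illustration and has nothing to do with MoarVM's real data structures. A generic path gathers type statistics, and once the op is hot and monomorphic a specialized path is swapped in; in a real VM, that specialized form is what the (simple) code generator would then translate to machine code.

    #include <stdio.h>

    typedef struct { int tag; double num; } Value;   /* tag 1 = "num" in this toy */

    static long generic_calls;
    static int  observed_tag = -1;
    static const long HOT_THRESHOLD = 100;

    /* Generic path: checks types every time and records what it sees. */
    static double add_generic(Value a, Value b) {
        observed_tag = (a.tag == b.tag) ? a.tag : -1;   /* crude statistics */
        generic_calls++;
        if (a.tag == 1 && b.tag == 1)
            return a.num + b.num;
        return (double)((long)a.num + (long)b.num);
    }

    /* Specialized path: assumes both operands are nums, no checks at all.
     * A real system would pair this with guards/deopt if the assumption breaks. */
    static double add_num_num(Value a, Value b) {
        return a.num + b.num;
    }

    static double (*add_op)(Value, Value) = add_generic;

    static double do_add(Value a, Value b) {
        double r = add_op(a, b);
        /* Hot and monomorphic on nums? Swap in the specialization. */
        if (add_op == add_generic && generic_calls >= HOT_THRESHOLD && observed_tag == 1)
            add_op = add_num_num;
        return r;
    }

    int main(void) {
        Value x = {1, 1.5}, y = {1, 2.5};
        double sum = 0;
        for (int i = 0; i < 1000; i++)
            sum += do_add(x, y);
        printf("%f\n", sum);   /* prints 4000.000000 */
        return 0;
    }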
dalek arVM: e7fb495 | Carlin++ | src/ (2 files):
replace some rogue free()s with MVM_free()
16:04
arVM: c43f6a9 | jnthn++ | src/ (2 files):
Merge pull request #169 from carbin/free-as-in-mvm_free

replace some rogue free()s with MVM_free()
timotimo oops, two of these were mine 20:37
jnthn Rouge coder! 20:38
japhb Rouge coder? 21:23
TimToady better red than dead code 21:24
timotimo the best code is read code 21:25
japhb "read code" sounds like someone with a head cold saying "head cold" 21:26
dalek arVM: b40762a | jnthn++ | Configure.pl:
Bump default optimization level to -O2.

We've been on -O1 for a while. Rakudo and NQP build and spectest/test just fine at -O2 with both clang and GCC these days, maybe thanks to the Great Warnings Fixing. Since this is happening post-release, there's maximal time for feedback if it causes anyone issues.
21:37
timotimo fwiw, i've been building on -O3 for the longest time 21:38
jnthn m: say 766900362 / 854208697
camelia rakudo-moar 23c963: OUTPUT«0.8977903933␤»
jnthn 10% fewer instructions run at startup just by switching to -O2 from -O1.
timotimo: Ah, maybe if all is well with -O2 this month, we can try -O3 next... :)
timotimo :) 21:40
does going to -O2 make the recent patches by nicholas less "effective"? :)
jnthn Hard to say; MVM_serialization_read_varint drops lower still in the table 21:44
timotimo how far down is it, ooc? 21:50
jnthn It's 8th
So really quite hot 21:51
Though that doesn't say anything about actual time
timotimo oh my
fair enough
jnthn bind_key is top
timotimo mhm, we stash lots of stuff into hashes 21:53
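For a sense of why a routine like MVM_serialization_read_varint can rank so high in a startup profile, here is a generic LEB128-style varint decoder; this is an illustrative sketch, not MoarVM's actual implementation. Deserialization ends up running a per-byte loop like this for a very large number of values.

    #include <stdint.h>
    #include <stddef.h>

    /* Decode one unsigned varint from buf, advancing *pos past it. */
    static uint64_t read_varint(const unsigned char *buf, size_t *pos) {
        uint64_t result = 0;
        unsigned shift = 0;
        unsigned char byte;
        do {
            byte    = buf[(*pos)++];
            result |= (uint64_t)(byte & 0x7f) << shift;   /* low 7 bits are payload */
            shift  += 7;
        } while (byte & 0x80);                            /* high bit set: more follows */
        return result;
    }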
jnthn m: say 760773418 / 766900362 22:14
camelia rakudo-moar 23c963: OUTPUT«0.9920107692␤»
japhb Is that for -O3? 22:15
jnthn No
It's for me not being a moron when doing graph programming.
japhb Go on, drop the other shoe ....
heh
Moron penalty: 0.8%
dalek arVM: 961f584 | jnthn++ | src/spesh/graph. (2 files):
Cache rpo_idx on BBs, rather than recomputing it.
22:16
arVM: a29eaa9 | jnthn++ | src/spesh/graph.c:
Inline rpo_idx; it's far neater this way.
jnthn Dominance is a bit less time-consuming now. :)
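A hypothetical sketch of the caching idea in those two commits: rather than searching the reverse-postorder array for a basic block's position every time the dominance computation needs it, store the index on the block when the ordering is built. The struct and function names here are invented and don't match the spesh code.

    #include <stdint.h>

    typedef struct BasicBlock BasicBlock;
    struct BasicBlock {
        BasicBlock **succ;      /* successors (unused in this sketch) */
        uint32_t     num_succ;
        uint32_t     rpo_idx;   /* cached reverse-postorder index */
    };

    /* Before: an O(n) search over the RPO array for every lookup. */
    static uint32_t rpo_idx_slow(BasicBlock **rpo, uint32_t n, BasicBlock *bb) {
        for (uint32_t i = 0; i < n; i++)
            if (rpo[i] == bb)
                return i;
        return (uint32_t)-1;
    }

    /* After: record the index once, when the ordering is produced;
     * the dominance code then just reads bb->rpo_idx. */
    static void cache_rpo_indexes(BasicBlock **rpo, uint32_t n) {
        for (uint32_t i = 0; i < n; i++)
            rpo[i]->rpo_idx = i;
    }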
japhb I read a story a long time ago, where one of the authors of x264 had a big wall chart of all the functions that made up the core encoding loop, and he had them marked with little squares, each of which indicated 100 CPU cycles per macroblock. 22:18
As he optimized, he would cross off each square he managed to remove from the core loops.
(And I sometimes wonder if he had thresholds for celebration as well.)
jnthn :)
timotimo is that noticeable? 22:19
just the instruction count seems not as amazing, but it doesn't map 1:1 to CPU time, of course
jnthn I didn't really check, though it was hunting through an array and so somewhat memory-bound.
japhb True, but 1:1 is actually not a bad first approximation for a modern processor, not counting context switches and other cache-destroying events 22:20
timotimo fair enough :)