moritz aye 13:29
done 13:30
also timotimo
timotimo o/ 13:31
moritz oh hai
I've invited all Rakudo contributors from this month
timotimo has the time come when the signal-to-noise in #perl6 has become too bad?
moritz dev signal, at least 13:32
user signal is still pretty high
timotimo ah, yes 13:33
jnthn Yeah...I gave up backlogging #perl6, but suspect I'm missing stuff.
moritz I've reconfigured rakudo and nqp to report their commits here, not in #perl6 anymore 13:34
guess I should do the same in roast
timotimo hmm. roast may want more feedback from users compared to rakudo and nqp perhaps?
perlpilot moritz++
moritz timotimo: don't think so
masak moritz++ 13:35
timotimo OK 13:36
perlpilot m: say "yay!"
timotimo carry on :)
perlpilot hmm
moritz the first answer after a fresh restart always needs ages 13:37
due to Bot::BasicBot's rate limiting
camelia rakudo-moar b243a9: OUTPUT«yay!␤»
RabidGravy erp 13:42
moritz Gesundheit 13:44
hoelzro o/ #p6dev 14:53
jnthn o/ hoelzro 14:54
jnthn So, time for some Perl 6 hackin' 15:19
RabidGravy eeeek! 15:22
timotimo poor mister gravy
jnthn So I'm looking at these compilation unit unique IDs we have 15:27
And realizing we use them for rather little down inside the VM 15:28
But also that we should be able to get away with making them way shorter 15:30
timotimo +1 on making them shorter :)
jnthn makes them just a stringified integer 15:33
I think their needing to be so very unique dates back to the Parrot days 15:34
timotimo "an integer"? does that mean something like "starting at 0"?
jnthn yeah 15:35
I think we might actually try and make them real integers in the future but it's...fiddly
(Bytecode back-compat)
timotimo right 15:36
we'll have a pretty significant saving nonetheless
what were cuuids before? like 32 characters?
jnthn cuid_NNN_timestamp before 15:37
now just NNN
jnthn Yeah, in Rakudo it saves 1.4MB off the empty loop program 15:37
nine So this is like the core developers channel?
jnthn nine: Yeah. Rakudo/NQP guts, hopefully at a traffic level that folks like me might be able to backlog again :) 15:38
nine nice
timotimo oh, wow.
jnthn: have you considered that for immutable strings (like the new cuuids), we can just use the fixed size allocator + strlen? 15:41
jnthn Well, I was just looking at them now and wondering, since we use them so rarely, whether to just look them up on-demand. 15:42
Not make an MVMString out of them at all
Damn, forgot to measure Rakudo's CORE.setting before the chance
*change
If anybody does get a number before/after feel free to pass it along, and it'll make it into the weekly report :) 15:43
timotimo i can do that 15:44
waiting for you to push the patch now
jnthn I did? 15:46
Where's dalek? :)
timotimo ah, there it is 15:47
(the patch)
i pulled rakudo and moar, but not nqp :D
jnthn ah, it was an NQP match
*patch
timotimo yeah, that's what i saw :) 15:48
m: say 100 * 11246944 / 11531000 15:50
camelia rakudo-moar b243a9: OUTPUT«97.536588␤»
timotimo m: say (11246944 R- 11531000) / 1024
camelia rakudo-moar b243a9: OUTPUT«277.398438␤»
timotimo not too shabby
even with the shorter cuuids, regenerating the stage0 files now still makes them a little bit bigger 15:55
jnthn And just shaved another 316KB off NQP's base memory for the empty loop program, and another 1MB for Rakudo, in the latest MoarVM commit :) 16:06
m: say 1088 + 1424
camelia rakudo-moar b243a9: OUTPUT«2512␤»
timotimo *neat* 16:07
jnthn So, Rakudo got at least 2.5MB lighter today, and these are also wins for each module you load
m: say 18228 / 46116 16:09
camelia rakudo-moar b243a9: OUTPUT«0.395264␤»
jnthn We're still a lot heavier than Perl 5 with Moose loaded, alas.
Skarsnik I just tested this gist.github.com/Skarsnik/03b970d2a4b827ba1e1d with today's rakudo. It still leaks, but in a different way; sometimes it keeps a constant size between 2 iterations 16:10
perlpilot how much heavier? (order of magnitude wise)
jnthn perlpilot: See the calc I just did above 16:11
timotimo Skarsnik: are you running that valgrind piece with --full-cleanup, btw?
jnthn Left is Perl 5 with Moose, right is Rakudo
perlpilot oh
jnthn++
timotimo so less than 3x heavier
Skarsnik Yes the valgrind trace is with full-cleanup
it's weird that it stopped leaking, then just some commits later it leaked again 16:12
but I should save the webpage locally, to see the size changes
jnthn Skarsnik: Yeah, that looks from the Valgrind trace like it's not a leak in the sense of "losing memory and not cleaning it up" 16:13
But a "something stays referenced and can't be GC'd" one
Skarsnik Today trace is weird pastebin.com/rWCh5Gth 16:15
timotimo could be individual pages in the gen2, perhaps?
Skarsnik I need to restart my bot because of that :( 8032.875000 Mb /opt/bin/moar 16:17
I can run a profile later if you want (I am finishing fixing something else) 16:20
timotimo in general, you can build more robust systems if you make failing a central part of operations 16:23
if "i had to restart the bot because the process that does the xml parsing leaks like stupid" is a problem, i'd say separate the xml parsing from the bot itself
then you can eagerly restart the leaky process without impacting the bot much
Skarsnik Well, I am not sure how to do that without fork 16:24
I tried fork with NC but it makes Gumbo (the C lib that parses the html) segfault 16:25
timotimo you don't know how to launch processes from perl6? 16:27
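(For illustration, a minimal sketch of the worker-process idea using Proc::Async; "worker.p6" and the one-URL-per-line protocol are made up here, not part of Skarsnik's bot:)
    my $worker = Proc::Async.new(~$*EXECUTABLE, 'worker.p6', :w);
    $worker.stdout.lines.tap(-> $result { say "parsed: $result" });
    my $done = $worker.start;
    $worker.say('http://example.com/page.html');   # hand the worker one job per line
    $worker.close-stdin;
    await $done;   # when the worker grows too fat, just let it exit and spawn a fresh one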
nine Which one will be easier to investigate? "Serialization Error: missing static code ref for closure '<unit>'" or "QAST::Block with cuid cuid_4_1458059076.06615 has not appeared"? 16:29
jnthn o.O 16:30
They're both horrible /o\
Skarsnik timotimo, well it makes me write more stuff to share data x)
nine One is the result of re-using a SC for $*W in EVAL and the other from re-using $*W itself
jnthn Re-using $*W probably won't go that well... 16:31
Hm
nine Probably I shouldn't nqp::pushcompsc an SC that has already been pushed... 16:33
jnthn Probably not 16:34
Though I'm not sure that'd lead to the error in question
nine No it doesn't. 16:36
What does that error actually mean?
jnthn The first one is really weird 'cus it suggests we had a closure clone of the mainline (maybe of the EVAL'd code) and tried to serialize it 16:37
m: say 1088 + 1424 + 1036 16:38
camelia rakudo-moar b243a9: OUTPUT«3548␤»
jnthn That's today's savings off Rakudo with the latest MoarVM commit :) 16:39
timotimo them's good savings
jnthn Aye 16:42
There'll be more to be had, but that's enough memory reduction work for today ;)
nine Maybe it's looking in the wrong SC for the static code ref 16:57
jnthn RabidGravy: github.com/MoarVM/MoarVM/commit/99bf383cb1 17:01
RabidGravy jnthn++ nice one 17:02
only another 50% to go off that one and we're cooking 17:04
;-)
timotimo can i has that code, RabidGravy?
RabidGravy 'ang on 17:05
jnthn timotimo: gist.github.com/jonathanstowe/5e43...82c2166a52
RabidGravy gist.github.com/jonathanstowe/5e43...82c2166a52
timotimo jnthn was 50% faster :P
RabidGravy possibly not a canonical way of generation but I'm a bear of very little brane 17:06
timotimo we don't optimize that kind of gather/take to a simple list push?
i'd almost say "use lazy loop" instead
and slip the inner values
jnthn You know 17:07
That first sub can be written as
sub gen-sin(Int $sample-rate, Int $frequency) { (0 .. ($sample-rate/$frequency)).map({ sin(($_/($sample-rate/$frequency)) * (2 * pi))})
}
gah, whitespace fail
But you can just return the list directly there, no? 17:08
timotimo i don't see how that would make it an infinite list?
jnthn oh wait
duh :)
RabidGravy rewrites welcome, it has to generate the CArray in say 256/44100 seconds
jnthn Yeah, and xx * isn't any faster than the gather/take it seems 17:09
m: say 256/44100
camelia rakudo-moar b243a9: OUTPUT«0.005805␤»
RabidGravy yeah the reason it does it like that is because the buffer is fixed and not an easy factor of the samples in a full cycle 17:10
timotimo turns out "lazy loop" doesn't lazy in this case 17:15
RabidGravy but yes, one could precompute the wave and keep feeding that. It's more of a proxy for "create some audio sample data" 17:16
timotimo yeah 17:17
i assumed so
would rotor break the laziness somehow? :\ 17:18
my sub with the "lazy loop" never returns :( 17:20
i thought we totally had that. i wonder what i'm doing wrong
jnthn Did you put the loop in parens, or write lazy on it? 17:21
So about why sink allocates so often: it's a megamorphic issue 17:22
Spesh won't produce so many candidates 17:23
For every type we sink
And we don't spesh/JIT at all in that case
(Long-standing spesh design issue I want to take on)
timotimo i wrote lazy on it 17:24
lazy for * does it, though
er, that doesn't seem to do "forever", though 17:25
i can't seem to get it to lazy at all. 17:26
oh 17:27
m: my $res = do loop { 1 }; say $res[10] 17:28
camelia rakudo-moar b243a9: OUTPUT«1␤»
timotimo m: my $res = do lazy loop { 1 }; say $res[10]
camelia rakudo-moar b243a9: OUTPUT«(timeout)»
timotimo ^- do explain this? :P 17:29
so with just "do loop" it works lazily 17:30
what time-number do i have to hit? 0.0058, yeah? 17:31
RabidGravy yeah, I think that's the budget for a buffer that size 17:33
timotimo i'm closing in
about 0.007 here 17:34
RabidGravy (I can cheat with ALSA as you can adjust the buffer size to suit the application), some more "real-time" audio APIs require a fixed buffer size
and I'm getting bored with typing that in this documentation ;-) 17:35
timotimo have you considered copy&paste? 17:36
RabidGravy it's always a slightly different context
maybe I should work up some "pod entity definition" thing
&<buff-size-spiel> 17:37
timotimo urgh :)
gist.github.com/timo/d97a284b884274e9e375 17:39
of course every time the gc sets in, it gets far worse 17:40
but in between gc runs i get consistently better results than 0.0075
m: say 0.0075 * 100 / 0.005805
camelia rakudo-moar b243a9: OUTPUT«129.198966␤»
RabidGravy I'll give it a try in a bit 17:42
timotimo places 3, 4, 5 and 6 in the routines-by-exclusive-time aren't jitted
12.37% of time spent in GC is a lot 17:43
timotimo hm. if we had a STORE for CArray, i could have a single CArray kept around that soaks up the values every iteration 17:44
RabidGravy I'm going to look at the CArray constructor when I'm done with this other thing, tests indicate it could be around 4x faster with an array if it just goes straight to the NQP rather than populating "nicely" 17:48
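(A rough sketch of the "keep one CArray around and refill it" idea; the int16 element type and the 256-sample buffer size are just for illustration:)
    use NativeCall;
    my $buf = CArray[int16].new;
    $buf[255] = 0;                                   # extend once to the full buffer size
    for ^256 -> $i {                                 # then refill in place on every iteration
        $buf[$i] = (32767 * sin($i / 256 * 2 * pi)).Int;
    }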
timotimo i have a much faster version that lets the $_ go from 0 to infinity 17:49
that's at 0.002-0.003 s/iteration
but it may not make sense for other instances of "generate some actual sound"
gather/take is actually faster than do loop { slip ... } 17:55
i bet there's a big opportunity for improvement in case we slip something big-ish
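(Roughly what the infinite variant looks like as a gather/take; the sample rate and frequency below are arbitrary, and gen-sin-lazy is just an illustrative name:)
    sub gen-sin-lazy(Int $sample-rate, Int $frequency) {
        my $period = $sample-rate / $frequency;
        gather for 0 .. Inf -> $i {                  # consumers pull only as many samples as they need
            take sin(($i / $period) * 2 * pi);
        }
    }
    my @wave := gen-sin-lazy(44100, 440).list;       # .list keeps it lazy, no full reification
    say @wave[^4];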
BBIAB
jnthn bah, I left rt.perl.org/Ticket/Display.html?id=127700 running under valgrind while I had dinner to see if it'd SEGV. It didn't even finish yet :) 18:58
RabidGravy well, if it doesn't segv it won't finish 19:08
I could make it do it consistently
jnthn oh 19:09
OK, then I'll just leave it
Got it running under gdb too
Guess it'll blow up eventually 19:10
RabidGravy never ran it with valgrind
jnthn Well, if it's a concurrency bug valgrind can end up hiding it 19:11
timotimo i wonder if it's important whether the sleep will be faster or slower than getting the data out 19:23
i.e. will the channel's buffer fill up or not?
jnthn Not sure 19:25
Anyway, can leave it to run for some time :)
nine How on earth can dalek excess flood when it did not say anything? 19:57
dalek kudo/nom: ec573e1 | (Andy Weidenbaum)++ | src/core/Numeric.pm:
fix Numeric eqv

  irclog.perlgeek.de/perl6/2016-03-15#i_12186016
19:58
kudo/nom: ba52706 | lizmat++ | src/core/Numeric.pm:
Merge pull request #723 from atweiden/numeric-eqv

fix Numeric eqv
timotimo it is a part of many channels 19:59
that's how :)
nine ooh...that also explains why it's so unpredictable when it gets kicked and when not 20:01
Hi lizmat! 20:02
lizmat moritz: I wonder how you invited me, because I missed it somehow 20:03
RabidGravy timotimo, that enhanced sine-generator is still not quite fast enough; it runs out of buffer after three iterations 20:09
timotimo i know
RabidGravy but nonetheless still quicker 20:10
timotimo especially the GC stuff
it really kicks the performance down 20:11
lizmat m: use nqp; dd nqp::list.^name, nqp::list_s.^name # is there a better way to differentiate between these two ?
camelia rakudo-moar ba5270: OUTPUT«"BOOTArray"␤"BOOTStrArray"␤»
RabidGravy once I've got v1 of the module out I'll give it some more smacking around
a faster CArray constructor (which I know how to do) and a faster rotor (which I don't) would probably swing it 20:13
timotimo well, we're definitely allocating quite a bit more than we should be :|
RabidGravy if there's headroom it can cope with the GC
lizmat m: use nqp; dd nqp::list.^name, nqp::list_s.^name # jnthn timotimo is there a better way to differentiate between these two ?
camelia rakudo-moar ba5270: OUTPUT«"BOOTArray"␤"BOOTStrArray"␤»
timotimo lizmat: well, you can compare the WHAT with a single instance of list and list_s 20:14
lizmat m: use nqp: nqp::list_s.WHAT
camelia rakudo-moar ba5270: OUTPUT«===SORRY!=== Error while compiling /tmp/x6kTq61p6M␤Confused␤at /tmp/x6kTq61p6M:1␤------> use nqp⏏: nqp::list_s.WHAT␤»
lizmat m: use nqp; nqp::list_s.WHAT
camelia ( no output )
lizmat m: use nqp; say nqp::list_s.WHAT
camelia rakudo-moar ba5270: OUTPUT«Cannot find method 'gist': no method cache and no .^find_method␤ in block <unit> at /tmp/9JP98Bfd8K line 1␤␤»
timotimo m: my Mu $slist := nqp::list_s(); say nqp::list_s("a").WHAT =:= $slist.WHAT
camelia rakudo-moar ba5270: OUTPUT«===SORRY!=== Error while compiling /tmp/wXl0O1eJWb␤Could not find nqp::list_s, did you forget 'use nqp;' ?␤at /tmp/wXl0O1eJWb:1␤------> my Mu $slist := nqp::list_s()⏏; say nqp::list_s("a").WHAT =:= $slist.W␤»
timotimo m: use nqp; my Mu $slist := nqp::list_s(); say nqp::list_s("a").WHAT =:= $slist.WHAT
camelia rakudo-moar ba5270: OUTPUT«True␤»
lizmat I think I'll stick to the .^name, because I may not have elements in the list to be compared 20:16
timotimo er, huh?
lizmat ah, ok, gotcha now
timotimo i expect going via .WHAT has very much better performance characteristics
lizmat m: use nqp; dd nqp::list_s.WHAT =:= nqp::list_s.WHAT
camelia rakudo-moar ba5270: OUTPUT«Bool::True␤»
lizmat gotcha :-)
timotimo of course you can put the .WHAT of nqp::list_s into that variable 20:17
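(i.e. something along these lines; the bootstrarray and is-str-list names are made up:)
    use nqp;
    my Mu \bootstrarray = nqp::list_s().WHAT;        # grab the type object once
    sub is-str-list(Mu \obj) { obj.WHAT =:= bootstrarray }
    say is-str-list(nqp::list_s("a"));               # True
    say is-str-list(nqp::list());                    # False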
moritz lizmat: I invited you with a bog-standard /invite lizmat :-)
timotimo m: use nqp; say BOOTStrArray
camelia rakudo-moar ba5270: OUTPUT«===SORRY!=== Error while compiling /tmp/nukWGC9EAC␤Undeclared name:␤ BOOTStrArray used at line 1␤␤»
timotimo because we don't have that available as a name anywhere it seems
lizmat moritz: I guess my irc client doesn't support that then... 20:18
timotimo you can probably "use" something to get at it, but that could just be overkill
nine moritz: seems like irssi doesn't even highlight the tab where the /invite is posted. I didn't see it either.
moritz nine: my irssi shows it as a notification in window 1 (where all the server messages also land) 20:25
hoelzro that's what happened for me; window #1 20:27
nine But it doesn't highlight window 1 in any way other than that there are new messages which is not unusual. 20:28
dalek kudo/nom: 93d597f | lizmat++ | src/core/List.pm:
Don't bother creating native str array if possible

The .join method doesn't need to create a native str array if $!reified already is a native str array.
20:36
lizmat jnthn: I'm considering creating an IterationBufferStr class, which would be the same as IterationBuffer, but with the _s nqp equivalents of push, atpos/bindpos 20:37
and then, in List!ensure-allocated, if not initialized yet, initialize with IterationBufferStr if self.of ~~ Str 20:38
that should give an immediate benefit on any Str arrays / Lists, specifically wrt to things like join 20:39
timotimo hm, i'm not so sure how much we win from a list_s over a regular list
would we be putting boxed strings into the regular list in that case?
lizmat I think so
timotimo hm, ok
when do we actually end up with a $!reified that is a list_s? 20:40
lizmat timotimo: not many cases yet, it happens when we return a list_s directly from a method, e.g. Parameter.named_names
timotimo ah 20:41
well, the check'll be very cheap once things are speshed
lizmat yes
timotimo actually, i wonder ... i know we optimize "istype", but do we optimize "what"? 20:42
istype is a bit simpler, since it also does subtypes fine 20:45
timotimo not so sure 20:46
we'd have to look at the output of spesh for that in particular
lizmat hmmm.. I guess it makes more sense to implement "my str @a" 20:49
m: my str @a 20:50
camelia rakudo-moar ba5270: OUTPUT«===SORRY!===␤native string arrays not yet implemented. Sorry. ␤»
lizmat isn't this just a list_s underneath ?
timotimo we probably want it to provide more methods, and also be written in perl6 so that it doesn't incur hllize everywhere 20:51
dalek kudo/nom: 7d36a51 | lizmat++ | src/core/native_array.pm:
Fix signatures + 1 error message
21:12
jnthn lizmat: Hmm...is that something we'd keep internal? 21:15
lizmat: But yeah, I'd rather we implement my str @a
lizmat: It shouldn't be much work
lizmat: Just another case for array.pm to handle
lizmat jnthn: I'm looking at it and it looks like a cat license job ?
jnthn IterationBufferStr sounds odd :) 21:16
lizmat: Yeah, should be little more than that :)
lizmat jnthn: yeah, I realize that now
ah, but could it all be done in P6 ?
jnthn lizmat: my str @arr will be, storage wise, just as efficient as nqp::list_s 21:17
lizmat: And has the same representation, so join will happily take it
lizmat indeed...
jnthn nqp::join, that is :)
lizmat ok, let me try to make that happen then
nine jnthn: I've made some progress...or at least a difference by binding the EVAL world's $!num_code_refs to the outer world's. 21:18
lizmat and possibly generate the intarray/numarray/strarray roles from a common source
jnthn nine: Hmm. :) And vice versa after the EVAL? 21:19
nine: We may want to steal a few bits of state
lizmat: Arrgh, not code-gen! :P
nine Vice versa? It's bound so, so it's the same variable anyway 21:20
jnthn nine: Ah.. :)
lizmat jnthn: so we rather copy-n-paste ??
jnthn lizmat: When it's only 3 cases, yes
nine jnthn: trouble is that the new behavior is a segfault. But at least it tells me that I'm looking in the right part of the code.
lizmat jnthn: I'm more worried about the maintenance cost of 3 roles being maintained by hand 21:21
if the only difference is int/num/str _i/_n/_s
jnthn lizmat: Which change very rarely...
perlpilot lizmat: whenever we get macros, the maintenance burden will decrease :) 21:22
jnthn RabidGravy: 'fraid that thing didn't explode under valgrind or gdb on my Linux VM after a couple of hours running...
lizmat and which I'm about to optimize, which I'd rather do once than 3 times with all of the possible mixups :-(
jnthn: so you have a problem with src/core/SLICE.pm as well? 21:23
(which is only 2 alternatives)
jnthn lizmat: I put up with it more than I like it :)
RabidGravy jnthn, maybe it got fixed "accidentally" as a result of some other change, I'll upgrade the rakudo and try again, if I can't reproduce with a newer version I'll close 21:24
jnthn RabidGravy: OK...I can quite imagine there is something there...
lizmat: But now it's there I don't think it's worth undoing it or anything 21:25
lizmat jnthn: to be honest, I don't really understand your apprehension 21:26
I think it's one of the best examples of DRY
jnthn Aye, and I tend to find our industry over-addicted to DRY :) 21:27
perlpilot is makeSLICE run by hand? Or is there something that notices tools/build/makeSLICE.pl6 is newer than src/core/SLICE.pm and auto-makes it?
s/auto-make/auto-run/
jnthn Anyway, I won't scream if you decide to do some code-gen for array.pm, given you're likely going to do more with it in the near future than I am :)
lizmat perlpilot: makeSLICE is supposed to be run by hand
atm anyway, I guess some makefile magic could be added 21:28
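(The flavour of generator being discussed, very roughly; the #TYPE#/#POSTFIX# placeholders and the template file name are invented, not what makeNATIVE_ARRAY.pl6 actually does:)
    my $template = slurp 'tools/build/native_array_role.template';   # hypothetical template file
    my @generated;
    for <int _i>, <num _n>, <str _s> -> ($type, $postfix) {
        @generated.push: $template.subst('#TYPE#', $type, :g).subst('#POSTFIX#', $postfix, :g);
    }
    spurt 'src/core/native_array.pm', @generated.join("\n");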
dalek kudo/nom: d56b686 | lizmat++ | src/core/ (2 files):
Move permutations() to operators

It doesn't belong in native_array: it was only there because it needed native arrays, and putting it in the file made sure of that. However, now that I'm focusing on native arrays, it felt better to move this
  "corpus librum" to a better place, later in the CORE setting.
21:42
nine It really looks like there is a strong expectation of a 1:1:1 relationship of comp_unit, world and sc in the compiler's code. 21:47
jnthn nine: Yeah...well, there always has been to date... 21:50
nine IOW this may take a while after all. 21:51
jnthn nine: Yeah, sounds like it goes a bit deeper than I first thought... :(
nine Could even be that sharing the world may be the better approach after all. But we'll see.
jnthn I wonder if we'll be able to split off the piece to share onto a separate object that $*W in turn references. 21:52
nine Could be. Right now the source of trouble seems to be that after finishing a comp_unit the compiler really wants to serialize the SC which won't work all that well when there's another world still holding on to that. 21:53
jnthn nine: oh...because it's in pre-comp mode...hmmm 22:05
nine: Maybe we should tell EVAL'd ones that they're actually not in precomp mode...
nine: So they won't try to do that
Sleep time here...'night 22:08
timotimo gnite jnthn 22:20
RabidGravy jnthn, yep the *first* example in #127700 segv'd for me after 311 iterations (and taking the sleep out) 22:22
the second one didn't however so something may have changed
let me run that with valgrind and/or gdb to see where it gets us 22:23
and lawks valgrind is sloooooooow 22:24
timotimo yeah 22:25
RabidGravy but with gdb it segvs quite promptly and it is the same backtrace as before 22:28
timotimo interesting 22:32
RabidGravy actually let me update the other machine and test it there, because it so could be a weird pthread bug or something 22:37
yep it segfaults quicker on the other machine (which may just be "less memory") 22:57
fc23 vs fc22 on the laptop 22:59
so no close ticket 23:00
dalek kudo/nom: 8cbb1ef | lizmat++ | tools/build/makeNATIVE_ARRAY.pl6:
Native array role builder (WIP)
23:23
lizmat good night, #p6dev :-)
RabidGravy toodles 23:25