01:22 FROGGS_ joined 01:48 ilbot3 joined 06:08 rurban_ joined 06:47 leedo joined 06:55 mtj__ joined 07:06 harrow joined 07:17 Ven_ joined 07:52 Ven_ joined 07:54 lizmat joined
jnthn moarning o/ 08:35
diakopter ooo///
nwc10 that it is 08:38
jnthn Also, Wednesday, meaning I get to do Perl 6 and Moar stuffs all day :) 08:43
diakopter eyes stuffs like a turkey on Thanksgiving
.. I must be hungry 08:44
jnthn ...well, I thought all day, then got grabbed for something else. But now I'm free. :) 09:37
09:39 domidumont joined 09:44 domidumont joined 10:11 domidumont joined
jnthn For anyone curious what I'm up to: re-working the multi-dispatch cache to handle named args, but also hopefully to be more efficient too. :) 10:59
lunch & 11:11
11:48 Ven_ joined
stmuk I'm running 'cd nqp && make m-bootstrap-files' on different platforms/compilers (both amd64) and getting different results for sha1sum src/vm/moar/stage0/nqp.moarvm 11:48
I assume this is expected? 11:49
12:03 lizmat joined
jnthn stmuk: How about if you do it with the same compiler in two runs? But yes, NQP uses timestamps at present to distinguish the bootstrap phases. 12:07
nwc10 we should use MD4s to make it both reproducible and hackable for comedy value :-/ 12:08
jnthn We should do many things, but only have resources to do a fraction of them. :) 12:09
A smarter fix would be to use the sha1 of the source as today but append a -1 or -2 depending on bootstrap phase 12:11
stmuk jnthn: ah and I can also see absolute paths 12:12
I was just wondering if it's possible to reproduce the moarvm blobs checked in 12:13
jnthn Not unless the timestamps can be fiddled, I suspect 12:14
stmuk libfaketime is used in reproducible gcc builds I believe
I suppose if I used the same windows path, worked out the time and used "RunAsDate" it might work .. 12:29
and traced *all* the steps back .. or just give up :) 12:30
12:55 avar joined 13:02 brrt joined
brrt good * #moarvm 13:08
y'know what's /also/ high up the priority list
aside from 'getting more hours per day'
(we should move to mars.. mars has longer days?)
nwc10 I can't remember. Venus has *much* longer days. But the weather is always crap 13:09
brrt as a brit that shouldn't put you off?
me fixing the insertion of spills 13:10
and loads
nwc10 it's a different scale of "crap". And a nice cup of tea isn't good enough to compensate
13:12 lizmat joined
nwc10 google claims 1d 0h 40m 13:13
Mars, Length of day
so that's actually closer to what I think I read the human body does (without sunlight to force a schedule) 13:14
what it's failing to tell me is what temperature water would boil at. (0.6% of Earth's atmosphere, but I don't know how to translate that to "how cold is my tea then?") 13:15
brrt it differs greatly per person (biological clock)
average is about 25 hours, but some people have much longer, others somewhat shorter 13:16
nwc10 anyway, if the choice is only Mars or Venus, I'll go for Mars.
brrt not hot enough to be worth the drink, i think
and, i don't have enough physics on me to figure out what the boiling temperature would be 13:17
geekosaur ice sublimates, iirc 13:19
so "boiling" point is below freezing point 13:20
brrt that's right, i think
ice tea it is then 13:21
13:52 domidumont joined 13:59 lizmat joined 14:28 zostay joined
arnsholt If I remember Red Mars correctly, ice sublimates on Mars, yeah 14:35
14:42 lizmat joined
moritz water ice? 14:42
on en.wikipedia.org/wiki/Triple_point..._water.svg you can see the conditions for liquid water 14:44
basically if you have 0.1 atmospheres, it's only liquid from ~0C to 50C 14:45
oh, but mars has only 6mbar pressure
that's... not much
brrt 6mbar, 600 Pa? 14:46
moritz 600Pa, and the triple point of water is at 611Pa
brrt so, sublimation, indeed
moritz right
brrt actually, on mars most of it will be vapor 14:47
anyway, dinner &
moritz of course, googling "does water sublimate on Mars?" would have been easier 14:48
but figuring that out based on the phase diagram offers some kind of gratification :-)
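To make the phase-diagram argument above concrete (standard reference values, not figures taken from the discussion):

    p_{\mathrm{Mars}} \approx 600\,\mathrm{Pa} \;<\; p_{\mathrm{tp}} \approx 611.7\,\mathrm{Pa} \quad (T_{\mathrm{tp}} = 273.16\,\mathrm{K} \approx 0.01\,^{\circ}\mathrm{C})

Since the ambient pressure sits below the triple point, the liquid region of the diagram is unreachable at any temperature: ice passes straight to vapour, so the tea never even gets to be liquid.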
14:59 lizmat joined
nine That was a short nap... 15:42
lizmat nine: I close my laptop when moving from one room to another :-)
16:03 domidumont joined 16:13 lizmat joined
dalek MoarVM/new-multi-cache: cffe519 | jnthn++ | src/ (3 files):
Start exploring a new multi-cache design.

This can handle named parameters also, uses callsite interning to do fewer checks, and a tree rather than an array of arrays for similar reasons. Still very much in testing, and re-integration with spesh is still to come, so it's probably slower in all the cases where that helped.
16:17
jnthn Note that Rakudo will also need patches to be able to take advantage of this, and that it's still not especially tested/polished. 16:18
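As a rough illustration of the design described in that commit, here is a minimal sketch in C. All names (SketchNode, SketchCacheEntry, sketch_find) are hypothetical and this is not the actual MoarVM code: the point is only that an interned callsite can be compared by pointer identity (which also covers which named arguments are present), after which the lookup walks a small tree of per-argument checks instead of scanning arrays of arrays.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct SketchNode {
        uint64_t tag;                      /* type/concreteness tag tested at this level */
        struct SketchNode *match;          /* next argument level (or a leaf) when the tag matches */
        struct SketchNode *try_next;       /* alternative tag at the same level */
        void *result;                      /* non-NULL only at a leaf: the chosen candidate */
    } SketchNode;

    typedef struct {
        void *interned_callsite;           /* interned callsite; pointer identity is enough */
        SketchNode *root;                  /* per-callsite tree of argument checks */
    } SketchCacheEntry;

    /* Walk the tree one argument at a time; NULL means a cache miss, so the
     * caller falls back to the full multi-dispatch and records the outcome. */
    static void * sketch_find(SketchNode *node, const uint64_t *arg_tags, int num_args) {
        int level = 0;
        while (node) {
            if (node->result)
                return node->result;               /* leaf: every argument matched */
            if (level < num_args && node->tag == arg_tags[level]) {
                node = node->match;                /* descend to the next argument */
                level++;
            }
            else {
                node = node->try_next;             /* try another entry at this level */
            }
        }
        return NULL;
    }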
timotimo the other day before falling asleep i was wondering if we should keep around an array of MVMCodepoint32 for the very common ASCII range, and maybe also a bunch of combinings (like up to -128 or something), and if a string is just one character long (like when we comb or substr) we'd just point into that array 16:21
jnthn Gets annoying when you gotta free it afterwards :) 16:22
timotimo if we have that array once per instance, it'll be a simple range check, though. maybe it's fine
jnthn What's the benchmark we're aiming to help, ooc? 16:23
timotimo it'll surely be a bit cheaper than mallocing a 4-byte big area, or mallocing a strand
i don't have one. as i said, it was during that strange time before falling asleep :)
jnthn I wonder a bit if a cache of MVMString * for the ASCII range would be easier and a bigger win
So if you see you're getting a 1-char string in that range you check if re-use is an option.
Which'd save a helluva lot of junk production with .comb for example 16:24
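A minimal sketch of that suggestion, with hypothetical names rather than MoarVM's real API: one canonical string object per ASCII codepoint, created once per instance (and, in a GC'd VM, kept alive as permanent roots), then handed out wherever comb/substr would otherwise allocate a fresh 1-grapheme string.

    #include <stdint.h>
    #include <stddef.h>

    #define SKETCH_ASCII_RANGE 128

    typedef struct SketchString SketchString;        /* stands in for MVMString */

    typedef struct {
        SketchString *one_char[SKETCH_ASCII_RANGE];  /* filled once at instance setup */
    } SketchStringCache;

    /* Called at the point where a 1-grapheme result is about to be produced. */
    static SketchString * sketch_one_grapheme(SketchStringCache *cache, int32_t g,
                                              SketchString *(*make_string)(int32_t)) {
        if (g >= 0 && g < SKETCH_ASCII_RANGE)
            return cache->one_char[g];               /* reuse: no allocation, nothing for GC to collect later */
        return make_string(g);                       /* outside the cached range: allocate as before */
    }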
timotimo a few months earlier i actually thought we could totally store a single grapheme into the pointer we'd usually use to point at a strand or MVMGrapheme32 array
jnthn That's also an option :)
timotimo but that'd probably want special-casing in every single string op
jnthn Yeah, it's also a pain like that :)
Could be fragile 16:25
timotimo to be fair, it could even store two graphemes. that'll make it a few times more painful, though
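For what it's worth, the pointer-packing idea might look roughly like this (a hypothetical sketch assuming 64-bit pointers; none of these names exist in MoarVM). Since heap pointers are aligned, the low bit is free to act as a tag, and that tag is exactly the extra check every string op would then have to perform before dereferencing:

    #include <stdint.h>

    /* Assumes 64-bit pointers so a full 32-bit grapheme plus the tag bit fits. */
    typedef union {
        uintptr_t bits;       /* low bit set: the grapheme is stored inline */
        int32_t  *blob;       /* low bit clear: pointer to a grapheme buffer */
    } SketchStorage;

    static SketchStorage sketch_pack_one(int32_t g) {
        SketchStorage s;
        s.bits = ((uintptr_t)(uint32_t)g << 1) | 1u;
        return s;
    }

    static int     sketch_is_inline(SketchStorage s) { return (s.bits & 1u) != 0; }
    static int32_t sketch_unpack(SketchStorage s)    { return (int32_t)(uint32_t)(s.bits >> 1); }

Every place that currently dereferences the storage pointer would need the sketch_is_inline check first, which is the special-casing and fragility mentioned above.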
jnthn Time for a break :) 16:27
timotimo is AFK for a bit, too
17:40 lizmat joined 18:20 domidumont joined 18:23 cognominal joined 18:30 cognominal joined 18:34 FROGGS joined 18:36 brrt joined
brrt good * again 18:37
FROGGS o/
brrt \o FROGGS 18:40
i wonder what kind of program we could organise for MOARCONF 18:41
nwc10 would there be breaks for talks between the coffee?
brrt if the venue allows it 18:42
FROGGS O.o
MOARCONF?
brrt yeah, like YAPC, but for moarvm or something 18:43
FROGGS and it would potentially happen in central Europe I guess 18:44
brrt we talked, or joked, about it yesterday
but yeah
lizmat is +1 18:45
FROGGS aye 18:46
would be very very awesome to let jnthn explain awesome architectury stuff, and then we hack :D
brrt lizmat++ suggested holding it at or after APW. that is not a bad idea 18:47
it's just that it'll be the third weekend of my working life and it might not be possible / a good idea to ask for free days already
so selfishly i'd think maybe somewhat later 18:50
19:21 lizmat joined
timotimo i wonder if i should just go ahead and start on the string cache thing 20:42
timotimo goes ahead 20:44
lizmat battery low&
21:05 rurban_ joined
timotimo timo@schmetterling ~> time perl6 -e 'for lines() { .comb }' < /usr/share/dict/linux.words 21:15
^- doesn't show a noticeable improvement
it only takes about 3 seconds anyway, though 21:16
and only ~80 megs of ram
jnthn Well, you're racing the GC. GC costs something, but so will caching. 21:17
timotimo true. also, all those strings will die immediately 21:18
maybe i should instead have appended the result of .comb into a global list
huh, the number of GC runs doesn't even differ between the two runs; with and without my cache 21:19
365 for each of the runs
jnthn hm, does your cache even get hit? :) 21:20
timotimo yeah, i had a piece of output for that
tbh it didn't output as often as i thought it would 21:22
with the list that keeps the results around i get fewer GC runs when the cache is active 21:23
i think it might have to do with the check i made for "is the source MVM_...GRAPHEME_32?" which i didn't rely on at all 21:25
i fixed that just now and will re-run the tests
i'm considering adding a little piece of code into the JIT for chr that'll try the cache before it invokes the C function 21:27
1226 GC runs vs 1334 21:28
ending in 4474 gen2 roots with the one and 10489 with the other 21:29
and at the 1226th GC run in the longer one, it's at 12693 gen2 roots 21:30
not entirely sure i understand that, but it's nice to have either way
jnthn: master, or into a branch? 21:34
also, you think there's another good benchmark for this? 21:35
the benchmark i have here is 42% (vs used-to-be 46%) GC time, btw
weird, i'm running into a whole lot of scalars allocated by dispatch:<!> (only a very small percentage compared to Scalars allocated in push, but still curious) 21:37
jnthn timotimo: Branch, I'd like to have a look at it. :) 21:38
timotimo good
and also i'm still convinced all the allocations for BOOTHash are either bogus (as in, shouldn't be in the profile) or bogus (as in, shouldn't be allocated) 21:40
jnthn Many are for *%_ :) 21:42
timotimo ha! i removed those scalars by having \name instead of $name
should we perhaps have a piece of analysis for those *%_? "this isn't assigned to, so just use shared global immutable empty hash"?
jnthn Well, spesh can eliminate it in many cases also 21:43
Unfortunately, spesh limitations prevent it from doing so on various megamorphic things...like sink 21:44
timotimo in this case it's pull-one, and push twice
and, unsurprisingly, sink 21:45
and reification-target, iterator, append ... another sink
even "defined" :(
jnthn Yup
It's on my "things to deal with in the next spesh shake-up"
timotimo yes :) 21:47
i also found the pull-one of comb allocates a crapton of Int, i may have found the fix for it, too
4953680 times invoked, 4473852 times allocated 21:48
not sure what's up with that :)
m: say 4953680 - 4473852 21:49
camelia rakudo-moar fd98fc: OUTPUT«479828␤»
timotimo m: say 4953680 / 4473852
camelia rakudo-moar fd98fc: OUTPUT«1.10725165␤»
timotimo 10%? kind of strange 21:50
hm, that didn't fix it 21:51
jnthn Time for me to rest...'night o/ 21:55
timotimo gnite jnthn!
1149 gc runs now 22:04
4473852 IntAttrRef allocated by pull-one from comb; that'd probably be worth a bit when optimized away into just direct access 22:26
m: say 361 / (3449 + 361) 22:27
camelia rakudo-moar fd98fc: OUTPUT«0.094751␤»
22:27 lizmat joined
timotimo ^- that's the percentage of how much stuff gets committed to gen2 during the comb - append benchmark 22:28
there are so many more wins to be had, all in all ...
weird, there's postfix:<--> being run on Num 22:29
lizmat dinner& 22:33
dalek MoarVM/short_string_cache: 4bbb2c6 | timotimo++ | src/ (5 files): 22:58
MoarVM/short_string_cache: this adds a cache for strings that are one codepoint long
MoarVM/short_string_cache:
MoarVM/short_string_cache: in addition, i think it might be nice to also have one
MoarVM/short_string_cache: long array of all codepoint values that could be in the
MoarVM/short_string_cache: storages of these strings, but that'll add a check when
MoarVM/short_string_cache: freeing MVMString objects. Might not be worth it.
MoarVM/short_string_cache:
MoarVM/short_string_cache: I didn't add a mutex because i think it's not needed;
MoarVM/short_string_cache: updating the pointer shouldn't break when multiple threads
MoarVM/short_string_cache: race it, and since the strings are gc-managed anyway,
MoarVM/short_string_cache: things will properly be freed if they get overwritten
MoarVM/short_string_cache: by another thread. Also, things that end up in the same slot will
MoarVM/short_string_cache: have equal contents, too.
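The "no mutex needed" reasoning from that commit message, as a hypothetical sketch (again not the real code): the slot is filled lazily with plain reads and writes, and if two threads race, one pointer just overwrites the other. Both strings are immutable with identical contents and both are GC-managed, so the losing copy simply becomes garbage.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct SketchString SketchString;        /* stands in for MVMString */

    typedef struct {
        SketchString *slots[128];                    /* one slot per cached codepoint */
    } SketchLazyCache;

    /* Caller guarantees 0 <= cp < 128. */
    static SketchString * sketch_lookup(SketchLazyCache *cache, int32_t cp,
                                        SketchString *(*make_string)(int32_t)) {
        SketchString *existing = cache->slots[cp];   /* plain, unsynchronised read */
        if (existing)
            return existing;                         /* hit: hand back the cached string */
        SketchString *fresh = make_string(cp);       /* miss: build a fresh one */
        cache->slots[cp] = fresh;                    /* plain write; a racing thread may overwrite it */
        return fresh;                                /* either copy is equally valid */
    }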
diakopter dalek burn 23:21
23:29 cognominal joined
timotimo sacrificing 1kbyte to have storage for all those one-char strings doesn't sound so terrible, tbh 23:32
special-casing one- and two-point storage in strings seems a little bit daunting, though. especially since we still have that hash stuff, don't we? 23:33