00:11  benchable6 joined
00:18  rba joined
00:23  travis-ci joined
travis-ci | MoarVM build failed. Jonathan Worthington 'Ensure ->caller/->static_info can't get outdated | 00:23
travis-ci.org/MoarVM/MoarVM/builds/286136655 github.com/MoarVM/MoarVM/compare/2...9a338b2369
00:23  travis-ci left
Zoffix | just the bump syncage issue | 00:31
00:31  rba_ joined
01:55  ilbot3 joined
02:08  travis-ci joined
travis-ci | MoarVM build failed. Zoffix Znet 'Merge pull request #726 from jsimonet/patch-1 | 02:08
travis-ci.org/MoarVM/MoarVM/builds/286204762 github.com/MoarVM/MoarVM/compare/d...9eb46642e9
02:08  travis-ci left
03:52  evalable6 joined
ugexe | has async io been left out because of tuits or design? the libuv plumbing seems easy enough, but api wise doesn't seem like it would be as simple as syncsocket/asyncsocket split | 04:31
Geth | MoarVM: jsimonet++ created pull request #727: typo | 05:26
timotimo | well, it was certainly night … | 05:52
yoleaux | 02:09Z <MasterDuke> timotimo: i've got a question about multis and find_best_dispatchee here irclog.perlgeek.de/perl6-dev/2017-..._15286003, with some relevant info here irclog.perlgeek.de/perl6-dev/2017-...i_15286163
timotimo | MasterDuke: i have no intuition about find_best_dispatchee :( | 05:54
05:55  domidumont joined
06:00  domidumont joined
06:04  domidumont joined
06:15  brrt joined
brrt | good * #moarvm | 06:15
timotimo | hey brrt
top of the morning to you | 06:16
brrt | yes, you too
timotimo | i just investigated a fun bug, where the Proxy returned by a SetHash's assignment gets returned from a lock.protect block and sunk outside of it, causing multiple threads to concurrently look at a given key and segfault | 06:17
concurrently with the given key being deleted
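A minimal sketch of the pattern being described, with made-up names (the real code is not shown in the log): the assignment's Proxy escapes the protected block and is only sunk by the caller, outside the lock.

    # hypothetical reconstruction of the bug pattern; names are illustrative
    my $lock = Lock.new;
    my %seen is SetHash;

    sub remember($key) {
        # The value of the last statement -- a Proxy into %seen -- is returned
        # from .protect and only sunk by the caller, i.e. outside the lock.
        $lock.protect({ %seen{$key} = True })
    }

    sub remember-safely($key) {
        # Sinking happens inside the protected region; nothing escapes the lock.
        $lock.protect({ %seen{$key} = True; Nil })
    }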
brrt | that's | 06:18
nice
for the record, i still blame the Container abstraction for 90% of what's wrong with perl6
timotimo | really? containers are really powerful, though | 06:19
or would you rather we had implemented explicit references instead?
brrt | i dunno. maybe | 06:20
yes | 06:21
something being powerful is never by itself a proper reason to support it
random gotos to memory locations are also powerfull
*powerful
06:23  patrickz joined
brrt | oh, timotimo++ on the performance analysis tooling grang | 06:23
*grant
timotimo | thanks
how cool is it that uthash already has a size parameter ready for all its uthash_free calls | 06:36
brrt | i don't know how cool that is | 06:37
timotimo | it's rather cool if you want to replace its use of malloc with fixed size allocator calls | 06:41
brrt | i see
is the FSA multi-threaded | 06:42
timotimo | yep | 06:43
that's not the important part, though
we use the "free at safepoint" mechanism to make that if other threads were looking at the hash at the same time they still have memory they're allowed to do whatever they want with | 06:44
the fsa takes "size, pointer", uthash has it "pointer, size" %)
i have no good reason to work on this, i should get started on my grant instead %) | 06:47
brrt | i should be working on so many things | 06:49
timotimo | hey look every perl6 run now gives a big "free: invalid pointer" right away | 06:51
brrt | progress? | 06:54
timotimo | no more crashes | 06:58
the crash in the example program is now no longer reliable | 06:59
oh, no, it totally still is
well, all of the work i've done right now will be required when somebody wants to fix this for real | 07:00
might as well push it upstream
er, into the git repo i mean | 07:01
Geth | MoarVM/mvmhash_use_fsa: 0908c9a4c7 | (Timo Paulssen)++ | 8 files | 07:07
  use the FSA for MVMHash and friends
  it's supposed to make concurrent hash accesses not as crashy by using the "free at safepoint" mechanism, but more changes are needed to make it any safer than it was before. This commit creates a bunch of versions of the HASH_ functions that end in _FSA and take a tc argument and then use the fixed sized versions of alloc and free.
07:26  brrt joined
brrt | sounds like progress yes | 07:29
timotimo | oh, only now do i notice that in my grant proposal there were … that got turned into … | 07:30
good job, movable type
brrt | i don't think i've read your grant proposal yet | 07:35
timotimo | news.perlfoundation.org/2017/09/gra...l-6-p.html - that's the url
"â—‹ GC explorer" - the hell is this? | 07:36
dang, the indentation also got messed up a little
07:43  jsimonet joined
timotimo | wow, perl6 -e 'say "hi"' is really consistent when run under callgrind | 07:53
that's nice to see
(only with spesh disabled, of course)
445,654,044 Ir with hashes using malloc/free, 445,301,169 Ir with hashes using the FSA | 07:54
doesn't blow me away, but could be a nice saving perhaps? | 07:55
69097 average maxresidentk (range 68832..69420) with fsa | 07:57
69152 average maxresidentk (range 68904..69364) with malloc/free | 07:58
ran it 9 times each
looking at the file i sent over to [Coke] for the grant application i can see 1) it's my fault the … got messed up, 2) it's not my fault the indentation levels got messed up | 08:09
08:10  zakharyas joined
09:00  robertle joined
timotimo | i can hardly believe our ref-indexes and ref-tos arrays need to hold 64bit ints, since they are indices to other arrays and if we have more than can be addressed with 32bit we're potentially screwed :) | 09:45
Geth | MoarVM: ed7831dee7 | (Julien Simonet)++ (committed using GitHub Web editor) | docs/jit/overview.org | 10:00
  typo
  This -> ""
MoarVM: b2ea17d7d8 | (Zoffix Znet)++ (committed using GitHub Web editor) | docs/jit/overview.org
  Merge pull request #727 from jsimonet/patch-2
  typo
10:03  rba joined
10:27  brrt joined
timotimo | i think the heap analyzer could really benefit from splice on same-typed native arrays being faster | 10:28
10:29  leont joined
timotimo | or i should forget about splitting the work there and then not have to splice data | 10:32
and for some reason memory usage keeps growing even though it *should* only be working inside super fast C code | 10:33
10:34  harrow joined
timotimo | oh! | 10:39
it's not using nqp::splice, it's doing a straight-up loop with push and pop | 10:40
i thought .splice was a kick-ass optimization, but it turns out .append is much directer | 10:42
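A rough Perl 6 illustration of the two shapes being compared here (array names and sizes are made up; the heap analyzer's actual code is not shown in the log):

    my int @slow  = 1, 2, 3;
    my int @fast  = 1, 2, 3;
    my int @chunk = 4, 5, 6;

    # element-at-a-time: one push call (and dispatch) per element
    @slow.push($_) for @chunk;

    # whole-block append: for same-typed native arrays this can bottom
    # out in a single nqp::splice of the underlying storage
    @fast.append(@chunk);

    say @slow;   # 1 2 3 4 5 6
    say @fast;   # 1 2 3 4 5 6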
about 800 megs maxresidentk now | 10:43
the before-measurement is not going so well ... 8.5 gigs already | 10:49
so yeah, i know who the winner is
huh, i wonder if the *@ in the signature for splice is problematic here | 10:50
12 gigs, wow | 10:52
MFW OO::Monitors dies with an "invalid BUILDALL plan" | 11:15
installing it, that is | 11:16
oh, no, it was installed - perhaps with an older version - and testing a cro component broke
when i'm splicing a few million elements from one array into another ... maybe it should do a gc sync point every now and then %) | 11:34
dang, it's not using the impl i wrote for my native int arrays .. | 11:50
jnthn | We should try and memcpy in splice when we determine that we can do so | 11:58
yoleaux | 02:09Z <MasterDuke> jnthn: i've got a question about multis and find_best_dispatchee here irclog.perlgeek.de/perl6-dev/2017-..._15286003, with some relevant info here irclog.perlgeek.de/perl6-dev/2017-...i_15286163
timotimo | yup, i just implemented that
jnthn | ah, nice
timotimo | what worries me is that this doesn't even go into splice at all: | 11:59
m: my int @a = 1, 2, 3; my int @b = 9, 9, 9; @a.append(@b)
camelia | ( no output )
timotimo | m: my int @a = 1, 2, 3; my int @b = 9, 9, 9; @a.append(@b); say @a
camelia | 1 2 3 9 9 9
jnthn | Maybe .append isn't yet implemented in terms of splice?
timotimo | there's a candidate for intarray that would accept a intarray @b in there
that would use splice
it just doesn't get taken
jnthn | intarray @b would be an array of intarray
timotimo | oh, yes | 12:00
it's intarray:D: int @other
i blame the *@ and **@ candidates i guess?
jnthn | Maybe but slurpy should be less specific
timotimo | s: (my int @), "append", \(my int @) | 12:02
SourceBaby | timotimo, Sauce is at github.com/rakudo/rakudo/blob/a722...ay.pm#L427
timotimo | that is indeed the one that runs nqp::splice ... ?!
hm. now it does use it. i wonder how i measured it wrong before? | 12:04
ah, of course | 12:09
m: multi doit(int @foo) { say "native int array" }; multi doit(@other) { say "other" }; doit(my int @); doit(my int32 @)
camelia | native int array
other
timotimo | it's because it's not an int64 one but a int32 one | 12:10
hm, so ... i could also implement a memcpy-based version for object arrays. i'll just have to go through the array another time to make sure everything's barriered correctly | 12:12
but if the array itself is in the nursery still, i don't have to barrier, right?
Ambiguous call to 'append'; these signatures all match: | 12:13
:(array::intarray:D $: array::intarray:D $values, *%_)
:(array::intarray:D $: @values, *%_)
that's not a fix :|
timotimo tries out "is default" | 12:15
hm, not yet available at that point, it seems | 12:16
or there's some other reason it explodes with "Cannot invoke this object (REPR: Null; VMNull)" | 12:17
Zoffix | timotimo: if the file you're editing is above this line, then yes, that'd be the reason. github.com/rakudo/rakudo/blob/nom/...urces#L147 | 12:19
(`is default` uses `does` op from that file) | 12:20
timotimo | ooooh!
yes, that would explain it
adding one candidate for each int kind we have ... not so great, but works | 12:22
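Illustratively, "one candidate for each int kind" has roughly this shape, extending the earlier doit example from the evals above (this is not the actual Rakudo patch):

    multi doit(int @foo)   { say "64-bit native int array" }
    multi doit(int32 @foo) { say "32-bit native int array" }
    multi doit(int16 @foo) { say "16-bit native int array" }
    multi doit(int8 @foo)  { say "8-bit native int array" }
    multi doit(@other)     { say "other" }

    doit(my int32 @);   # now hits the int32 candidate instead of "other"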
Zoffix | m: multi x(Int) { say "here" }; multi x(Int) { say "there" }; BEGIN &x.candidates[1].^mixin: role { method default(--> True) { } }; x 42
camelia | there
Zoffix | m: multi x(Int) { say "here" }; multi x(Int) { say "there" }; BEGIN &x.candidates[0].^mixin: role { method default(--> True) { } }; x 42 | 12:23
camelia | here
Zoffix | :)
timotimo | cool. doing a whole bunch of memcpys now where the arrays are object arrays | 12:27
perl6 -e '' does it 49 times for a total of 215 elements | 12:29
ah, it's not only about generational barriers, it's also about SC write barriers
so this code would crash and burn if that has to happen but doesn't
ah, ASSIGN_REF is really only about gen2 stuff, and if the root object is not in gen2 it's completely bypassed | 12:34
12:44  zakharyas joined
timotimo | 403,273,321 vs 403,285,477 is what difference memcpy for object vmarrays brings for perl6 -e '' | 12:53
hardly something to write home about, but it's nice to know it impacts it at least a tiny bit | 12:54
Geth | MoarVM: b9d3f6da34 | (Timo Paulssen)++ | src/6model/reprs/VMArray.c | 13:05
  a fast path for VMArray splice using memcpy
  happens when both vmarrays are of the same slot type. also works with objects, but only if the array isn't in the old generation, because otherwise we'd have to go through all pointers and write-barrier them.
13:06  Geth joined
timotimo | bbl | 13:06
13:26  stmuk_ joined
13:44  rba joined
timotimo | mhh, had some noms | 13:56
14:17  dogbert2 joined
jnthn | No time to read this at the moment but looks worth a look: soft-dev.org/pubs/html/barrett_bolz...d_cold_v6/ | 14:48
15:02  brrt joined
15:20  leont joined
ugexe | jnthn: is async filesystem io not in moarvm because its not a high priority, or because it wouldn't be useful enough to warrant the complexity? | 15:50
jnthn | ugexe: A bit of both and also a question of whether there's a win in doing it in the VM (more) | 16:04
16:04  zakharyas joined
jnthn | ugexe: In terms of priority: async sockets are really useful 'cus you tend to be juggling dozens of sockets and latencies are quite high, and async procs because you can't sanely collect stdout and stderr without some kind of async. File access has a bunch less latency. | 16:06
moritz | ... unless you deal with slow NFS shares :/
jnthn | Well, there is that | 16:07
Beyond that, I recall reading that libuv actually uses a thread pool behind the scenes to manage outstanding file I/O requests
I don't know if that means that it just has threads doing "normal" blocking I/O, but if that's all it's doing, then we have a perfectly fine threadpool at Perl 6 level | 16:08
So stick a `start` before your I/O code and all's well
I guess the other piece in this is that sockets and procs more naturally lend themselves to async handling, in that you want to react when something arrives. Files are usually processed as fast as we can grab the data and do stuff with it. | 16:11
16:11  ggoebel joined
jnthn | Yeah, here's the quote I was thinking of, it's top of the doc: "The libuv filesystem operations are different from socket operations. Socket operations use the non-blocking operations provided by the operating system. Filesystem operations use blocking functions internally, but invoke these functions in a thread pool and notify watchers registered with the event loop when application interaction is required." | 16:13
from nikhilm.github.io/uvbook/filesystem.html
So that suggests there's no win from more complexity in the VM | 16:14
When we can do the same just fine at Perl 6 level
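A concrete reading of "stick a `start` before your I/O code" (the file name here is just a placeholder): the blocking read runs on the Perl 6 thread pool and you get a Promise back.

    # sketch only; 'big-input.log' is a made-up path
    my $contents = start { 'big-input.log'.IO.slurp };

    # ... other work can happen here while the slurp runs on the thread pool ...

    say (await $contents).chars;   # block only when the data is actually needed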
ugexe | i wonder if that is due to difficulty in providing it cross platform, since I thought windows has decent async file system IO
jnthn | Maybe, dunno | 16:17
Maybe it's also a case of "possible but nobody wanted it enough"
Are you asking out of curiosity, or out of having a use case? :) | 16:18
ugexe | curiosity mostly. having been digging around libuv IO stuff I started wondering how useful async mkdir could possibly be | 16:20
timotimo | jnthn: cro is *really* unhappy with me using start it looks like | 16:21
jnthn | ugexe: Probably more useful when you're in Node.js and have a single thread | 16:22
timotimo: Huh?
timotimo: It already runs every request handler inside of a start
timotimo | should i be seeing output if i put a "start { say 'hi' }" after the $http.start (in the single page app example) | 16:23
jnthn | I'd expect so
timotimo | it's unreliable at best | 16:24
jnthn | Are you running it under `cro run`?
timotimo | i am
jnthn | 'cus we don't have a way to allocate a ptty yet
So output gets held back by buffering
'cus the Proc::Async that runs it has an output pipe
timotimo | oh, that's a good hint
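One workaround that would follow from the explanation above, assuming the buffering is on the service process's side of the pipe (not confirmed in the log): flush $*OUT explicitly so the debug print isn't held back.

    # hedged sketch: flush right after the say so the line makes it
    # through the Proc::Async output pipe even without a pty
    start { say 'hi'; $*OUT.flush }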
jnthn | I've been wondering what to do about that | 16:25
timotimo | got an idea yet for "cro trace is nice, but not if you have bundled javascript to serve"? :)
[Coke] | (ptty) hey, did you hear about the weird terminal state I got in using proc::async to run ssh -t? :)
jnthn | No | 16:27
timotimo: I was just going to truncate it after a certain number of bytes :)
timotimo | that should work well
jnthn | timotimo: It's an easy patch, I'd think
[Coke]: fwiw, there's an SSH::LibSSH module :)
16:28  zakharyas joined
16:36  rba_ joined
16:39  robertle joined
16:46  rba joined
17:03  zakharyas joined
17:17  domidumont joined
18:35  evalable6 joined
18:37  Ven joined
19:15  evalable6 joined
20:22  stmuk joined
20:57  stmuk_ joined
22:11  domidumont joined
22:36  leont joined
22:54  leont joined
23:25  leont joined
23:56  evalable6 joined