00:55 committable6 joined
01:52 ilbot3 joined
samcv | was someone looking at trying to do heap profiling? | 01:52 | |
i think there are malloc replacements that can do stuff like that. i know jemalloc does | |||
MasterDuke | yeah, but the heap snapshot profiler actually gives you Rakudo-level info (e.g., number/size of types of objects) | 01:54 | |
samcv | ah nice | ||
03:30 AlexDaniel joined
05:30 ilbot3 joined
05:45 brrt joined
brrt | good * | 05:46 | |
05:52 domidumont joined
05:59 domidumont joined
06:01 leont joined
06:16 geekosaur joined
06:43 lizmat_ joined
06:58 geekosaur joined
07:07 brrt joined
07:24 lizmat joined
timotimo | yo | 07:52 | |
brrt | ohai timotimo | 08:02 | |
i've been worryingly unproductive the last few weeks :-o | |||
timotimo | you think so? | ||
brrt | yeah | ||
i mean, nothing got done, actually | |||
timotimo | i think you've been fine, whereas i have actually been worryingly unproductive during that time | 08:03 | |
brrt | well, except for merging the old changes, but these were old | ||
timotimo | it's still an important task | ||
samcv | ^ | 08:05 | |
08:06 robertle joined
nine | That's one thing I have a quite hard time teaching my developers at work: unmerged work is completely useless. It has exactly 0 value to the user. So yes, going the final step of cleaning a branch and getting it through review is critical for turning time into value. | 08:06 | |
timotimo | i have many branches with unmerged work :( | ||
at least the one i'm working on right now is making nice progress towards being included for this release | 08:07 | ||
nine | timotimo: that's the irony. You've made great progress this time, yet feel unproductive. Just hang in, you're almost there :) | 08:08 | |
timotimo | oh, i was referring to the time before this work | 08:09 | |
i actually feel productive for a change | |||
samcv | i've been feeling fairly productive. technically not in my grant but i want to get anything i can that uses get_grapheme_at_nocheck to use the cached grapheme iterator but only for strands | 08:12 | |
08:12 AlexDaniel joined
samcv | and i got it so that now if it requests a grapheme earlier than before, it will reinitialize the grapheme iterator | 08:12 | |
eventually we could probably make it so it works backwards as well instead of just forwards. and i already fixed a bug which made it not work for repeats unless it was starting at 0 | 08:13 | ||
feels awesome to get such great speed ups :) | |||
though since perl 6 flattens the strings before doing regex it won't have as much impact i guess? | 08:14 | |
timotimo | we could perhaps make "indexingoptimized" do something cleverer than just flattening the whole thing | 08:17 | |
if the performance impact from a few strands isn't too bad, we could probably keep a few in there | |||
where does your grapheme iterator cache live btw? | |||
brrt | ++ for a backwards-walking iterator | 08:18 | |
that seems like it ought to be totally doable | |||
samcv | timotimo, in iter.h | ||
yeah | 08:19 | ||
timotimo | i mean where is the cache installed? | ||
samcv | and probably easier now that i elimited a ton of loops | ||
well it's stored only in the function iterating | |||
timotimo | cool | ||
samcv | MVMGraphemeIter_cached which stores the cached grapheme, the last requested index, and the grapheme iterator | ||
and so either does MVM_string_gi_get_grapheme, or returns the cached grapheme, or moves it and returns it. or reinitializes it and then moves and returns | 08:20 | |
allowing it to save lots of work for strands | |||
timotimo | the GraphemeIter has a GraphemeIter in itself? | ||
samcv | timotimo, github.com/samcv/MoarVM/blob/ignor...#L308-L328 | 08:21 | |
scroll up a tad and you will see the struct for the type and the initializer | |||
timotimo | looks good | 08:24 | |
the name is strange, but i suppose it's okay since it's not public API | |||
samcv | yeah | ||
if you use it for flat strings it will slow it down, so it shouldn't be used everywhere | 08:25 | ||
i mean not *that* much. but like at least 20% | |||
and 2x speedup for indexic when it's made up of 10 strands | 08:26 | ||
which is impressive | |||
timotimo | nice | 08:27 | |
brrt | can i ask the stupid question why the cache is in the data structure and not in the caller | 08:30 | |
08:30 zakharyas joined
samcv | it is in the caller | 08:32 | |
well you declare `MVMGraphemeIter_cached gic` and then initialize it | |||
so it's stored on the stack in the caller | 08:33 | ||
or do you mean some other thing? | |||
brrt | hmm, i see what you mean, i guess | 08:42 | |
but doesn't the iterator store its current position? | 08:43 | ||
timotimo | it stores a MVMStringIndex "pos" | 08:46 | |
jnthn | morning o/ | 08:48 | |
brrt | morning | ||
so, what the 'cached' structure adds, is a cached last grapheme? | 08:49 | ||
that does make some sense, yes | |||
08:50 lizmat joined
timotimo | i need a different hex editor than okteta ... | 08:53 | |
jnthn | Coming back to something the day after is usually a good idea. :) I realized that there is a fairly cheap way to find which inline we're in: search BBs looking just at their first instruction and seeing if it has an inline annotation, then we know that's where that inline starts | 08:56 | |
Or ends, for that matter | |||
nine | That sounds almost too easy :D | ||
08:57 Skarsnik joined
timotimo | tried bless, can't even tell it to limit display to n columns | 08:57 | |
jnthn | Ends is cheaper | 08:58 | |
'cus ->linear_next | 08:59 | ||
Though means you need to pick the max inline end annotation to know the innermost one you're in | |||
timotimo | hte doesn't support 64bit integers ;_; | 09:01 | |
samcv | brrt, it doesn't store the position; it stores just which strand, the position in that strand, and how many repeats are left in the particular strand it's on | 09:08 | |
brrt | timotimo: hexl-mode? | ||
i see | 09:09 | ||
timotimo | brrt: i'm not comfortable with emacs, though | 09:10 | |
brrt | well, that's why you'll always be on the lookout for the next tool :-P | 09:12 | |
timotimo | heh. | ||
things that frustrate me in okteta: | 09:13 | ||
can't display the offset in decimal (though i could instead output all numbers from the program in hex instead) | |||
can't move backwards/forwards in 64bit increments | |||
that's more or less it i suppose | 09:14 | ||
also, defining structs is a major song-and-dance where you have to create a .desktop file and javascript or xml by hand | 09:15 | ||
09:17 brrt1 joined
timotimo | it doesn't even have an entropy overview | 09:18 | |
m(, you can't click "find previous" right after you clicked "find next" because it puts the cursor after the match and doesn't know to skip the one right before the cursor | 09:21 | ||
09:23 AlexDaniel joined
jnthn | Well, seems my inline check thingy does the job | 09:30 | |
Or at least, NQP builds | |||
09:37 lizmat joined
09:38 lizmat_ joined
Skarsnik | hm no way to get a bt in gdb when moar panic? | 09:39 | |
timotimo | sure there is | ||
break MVM_panic | |||
Skarsnik | but I can't use the perl6-gdb-m script? | 09:40 | |
09:40 lizmat__ joined
timotimo | of course you can | 09:41 | |
jnthn | You can, just set the breakpoint and hit r to run it again | ||
timotimo | just have to ctrl-c early so you can set the breakpoint | ||
jnthn | Or that :) | ||
timotimo | or what jnthn said | ||
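Put together, the recipe timotimo and jnthn describe looks roughly like this (session sketch; `perl6-gdb-m` and `MVM_panic` are taken from the chat, the script name is a placeholder):

```
$ perl6-gdb-m your-script.p6
^C                        # interrupt early to drop to the (gdb) prompt
(gdb) break MVM_panic
(gdb) run                 # 'r' also works; re-runs the program from the start
...
(gdb) bt                  # once the panic breakpoint fires, get the backtrace
```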
09:43 lizmat joined
Geth | MoarVM: 234f5a8ce7 | (Jonathan Worthington)++ | src/spesh/optimize.c Take more care when eliminating set inside inline If we have something in an inline like: sp_fastinvoke_o r100(1), ... set r10(3), r100(1) Where r10(3) is outside of the inline, then we must not rewrite it to: sp_fastinvoke r10(3), ... Because it will break deoptimization (specifically, uninlining), which assumes the value will be returned into the frame itself. |
09:43 | |
Skarsnik | well that does not really help me figure out rt.perl.org/Ticket/Display.html?id=131003 :( | 09:47 | |
I ran it with libefence but after 24h it was still running x) | |||
Geth | MoarVM: 6048ac75e5 | (Jonathan Worthington)++ | 3 files Re-compute dominance tree before stage 2 opts. This means that they will be able to reach inside of inlines also. |
09:50 | |
jnthn | Finally, we can do that. | 09:51 | |
timotimo | yeessssss | 09:56 | |
Skarsnik | 14166 skarsnik 20 0 2526m 1.2g 19m R 99.9 31.5 4:18.58 moar 1-2.5Gb of ram with efence lol | 10:01 | |
dogbert17 | t/04-nativecall/21-callback-other-thread.t doesn't work very well all of a sudden | 10:03 | |
brrt1 wonders if there isn't a way to do deoptimization 'cleverer' | |||
timotimo | there is | ||
remember my "deopt bridge" idea? | 10:04 | ||
brrt | yes, i do remember that | ||
why don't you implement it :-P | |||
or maybe revive it | |||
jnthn | There's various things we can do, but given the complexity of where we're at is already pretty high... | ||
brrt | that is true | 10:05 | |
dogbert17 | ==9822== Thread 2: | 10:10 | |
==9822== Invalid read of size 2 | |||
==9822== at 0x41A7731: within_inline (optimize.c:2252) | |||
==9822== by 0x41A7977: try_eliminate_set (optimize.c:2301) | |||
==9822== by 0x41A7A59: second_pass (optimize.c:2318) | |||
jnthn | m: say 0.67 / 0.70 | 10:11 | |
camelia | 0.957143 | ||
jnthn | dogbert17: grr, how'd you get that? | 10:12 | |
dogbert17 | when I run make t/04-nativecall/21-callback-other-thread.t | ||
with MVM_SPESH_NODELAY=1 | 10:13 | ||
nine | 4.962s! That's the first time I've seen csv-ip5xs doing 100_000 iterations in less than 5 seconds | 10:18 | |
jnthn++ | |||
jnthn | huh, I thought you were the one making that faster? ;) | 10:19 | |
Some of my recent changes coulda though :) | |||
nine | Well, Inline::Perl5 is still a whole lot Perl 6 code ;) | ||
jnthn | Spesh is gradually getting smarter | ||
timotimo | once you stomp out the dumb i put in it :D | 10:20 | |
jnthn | The 0.67 / 0.70 I just did above, btw, is for `for ^1000000 { for 1..5 { last } }` | ||
nine | And I've seen lots of unnecessary sets in the MAST compiled bodies of native subs. | ||
jnthn | The 4% or so win coming from making the rewriting of control exception throws into gotos work with inlines | 10:21 | |
Which means that opt is now useful for Perl 6 code | |||
(Because the throw is always in a multi candidate of next/last/redo) | |||
So the last actually after optimization becomes a goto in that case now | 10:22 | ||
timotimo | lovely | ||
"next" is something Text::CSV has a lot of | 10:23 | ||
jnthn | Yeah, though hard to say whether its stuff falls within inline limits, but perhaps so :) | ||
timotimo | aye | ||
if not, we have to make the resulting code even smaller :) | |||
jnthn | ah darn, SEGV when building CORE.setting :( | 10:24 | |
nine | Now only a 27.4x slow down compared to pure Perl 5 | 10:26 | |
Last number I posted was 33 I think | 10:27 | ||
jnthn | Ah, SEGV is the one dogbert17 found with valgrind | 10:28 | |
But what on earth... | |||
Ah, hmm | 10:29 | ||
10:38 brrt joined
jnthn | Think I got a fix | 10:39 | |
Showed up in multi-level inlines | |||
brrt | i'm curious | 10:48 | |
timotimo runs a heap snapshot under valgrind | 10:50 | ||
Geth | MoarVM: 263984f566 | (Jonathan Worthington)++ | 3 files Add a reliable locals count for inlines When we do a multi-level inline, then we won't have a spesh graph for the nested inline, and so cannot simply ask it for its locals count. The static frame is also not reliable for this purpose, as the inline may itself have inlines. Thus just add a field holding the value, so we reliably have it available. |
||
timotimo | having a "skip this many bytes after each snapshot" piece in the index lets me figure out proper incremental pieces for aborted heapsnapshots after the code has been merged | 10:54 | |
10:57 releasable6 joined
10:58 mtj_ joined
Geth | MoarVM: 04f6be4ff6 | (Jonathan Worthington)++ | src/spesh/optimize.c Move throw -> goto optimization to second pass. Meaning that we can now apply it to inlined code. Also fix it up to cope with being used across inlines. This means that Perl 6 next, last, and redo may potentially be rewritten into gotos (in Perl 6, throws of control exceptions are always inside of multi candidates, which we've been able to inline for a while, but could not rewrite to a goto since the optimization didn't work across inlines until now). |
10:58 | |
jnthn | There we go | 11:00 | |
11:00 AlexDaniel joined
timotimo | jnthn: can you think of any particular things i should test before merging the new heapsnapshot format? | 11:01 | |
jnthn | timotimo: Do all the moar-ha commands work? :) | ||
Including path... :) | |||
timotimo | yup | 11:02 | |
jnthn | And try it on a multi-threaded program | ||
Shouldn't be an issue | |||
But good to be sure | |||
timotimo | hm, i can no longer directly compare the old and new format of the very same program | ||
11:03 stmuk_ joined
timotimo | i don't think we can run sufficiently deterministic for heap snapshots to come out the same | 11:06 | |
==7103== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) | 11:07 | ||
jnthn | Yeah, but can check the output looks sensible :) | ||
lunch, bbiab | |||
timotimo | it always looks sensible :D | 11:08 | |
oh amusing, by entering "path random-number" i got one containing a frame from OO::Monitors | |||
hah, running a multithreaded program in valgrind while it heapsnapshots is no use, as valgrind serializes threads | 11:13 | ||
but at least there'll be multiple heaps | 11:14 | ||
jnthn: how terrible is your hack to support "ignore inter-gen roots unless absolutely necessary"? | 11:15 | ||
looks good so far | 11:28 | ||
the async example from Terminal::Print takes 460744maxresidentk with the new heap snapshot output format | 11:29 | ||
now i'll recompile master and see how well that does | |||
Geth | MoarVM/heapsnapshot_binary_format: c3c1926bf8 | (Timo Paulssen)++ | 2 files write parts of types/static frames/strings incrementally so in case the program crashes before it can finish up, we can still recover everything we need. |
11:31 | |
timotimo | oh, left some debug output in there | ||
i'll amend the commit later | |||
whoops, i need the fix from my branch, too | 11:36 | ||
1971868maxresident | 11:37 | ||
m: say 460744 / 1971868 | |||
camelia | 0.2336586 | ||
timotimo | nice. | ||
and that only grows as runs go longer | |||
Geth | MoarVM/heapsnapshot_binary_format: fbd618283e | (Timo Paulssen)++ | 2 files write parts of types/static frames/strings incrementally so in case the program crashes before it can finish up, we can still recover everything we need. |
11:38 | |
timotimo | jnthn: can has access to github.com/jnthn/p6-app-moarvm-heapanalyzer ? | 11:47 | |
otherwise you'll have to review a pull request :P | 11:55 | ||
jnthn | timotimo: Sure | 12:06 | |
timotimo | now i'm writing a nice description for the pull request, but i can just merge it myself :) | ||
jnthn | timotimo: Invite sent | 12:07 | |
timotimo | accepted | ||
one thing i haven't figured out yet (which i just noticed) is that if we start a heap snapshot collection from gdb or something, we don't have a very good spot to finish up the heap snapshot? or can we literally just call into the "end profiling" sub at any point? | 12:13 | ||
i suppose we probably can | |||
Geth | MoarVM/master: 11 commits pushed by (Timo Paulssen)++ review: github.com/MoarVM/MoarVM/compare/0...de6dceda81 |
12:14 | |
timotimo | \o/ | ||
nine | timotimo: congratulations! | 12:24 | |
Seriously cool work :) | |||
timotimo | thanks! | ||
jnthn | So this gets me...faster heap profiler? :) | ||
timotimo | there's lots of next steps | ||
jnthn | And less memory overhead when profiling? | ||
timotimo | only the loading of snapshots is faster. but it's loads faster | ||
jnthn | Yeah, that's the part I was thinking of :) | ||
timotimo | yes, the memory growth of the program under profile is seriously reduced | ||
jnthn | That's the slow part, the actual snapshotting wasn't bad already | 12:25 | |
nine | Faster and you get output on interrupted programs. That would already have been soo helpful for me | ||
jnthn | Very nice | ||
timotimo++ | |||
timotimo | well ... the parser can't yet handle the output from interrupted programs | ||
Zoffix | timotimo++ | ||
timotimo | next up is maybe calculating incident edges so we can quickly query "what points at this object?" | 12:26 | |
"Hey, what points at the Str STable?" | |||
"here's your five million objects" | |||
nine | Rather...what the hell points at the A STable, causing a repossession conflict? | 12:27 | |
timotimo | :) | 12:31 | |
12:47 zakharyas joined
timotimo | and of course at one point there'll be a nicer interface than the cli | 12:53 | |
13:01 AlexDaniel joined
13:02 travis-ci joined
travis-ci | MoarVM build failed. Jonathan Worthington 'Re-compute dominance tree before stage 2 opts. | 13:02 | |
travis-ci.org/MoarVM/MoarVM/builds/272413799 github.com/MoarVM/MoarVM/compare/2...48ac75e5a7 | |||
13:02 travis-ci left
jnthn | bah, empty log: travis-ci.org/MoarVM/MoarVM/jobs/272413811 | 13:03 | |
13:21 Skarsnik_ joined
nwc10 | ASAN doesn't seem to make any comment | 13:22 | |
(not totally confident whether some ASAN errors are eaten up as "regular" test failures, and there are regular test failures under ASAN when run in parallel due to timing "issues") | |||
13:25 squashable6 joined
timotimo | oops, i haven't pushed the necessary nqp changes for the heap snapshot profiler | 13:40 | |
there it is | 13:42 | ||
dogbert17 | jnthn: what's next on your fix list? | 13:49 | |
brrt | timotimo++ | 13:50 | |
dogbert17 | c: HEAD say "Test" | 13:52 | |
meh | |||
committable6 | dogbert17, ¦HEAD(80a3255): «Test» | ||
dogbert17 | yay | ||
nwc10 | fixing up a curry? | ||
dogbert17 | c: MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 HEAD use Test; { { my sub foo { fail }(); CATCH { default { my $bt = .backtrace; is $bt.list.elems, 4, "we correctly have 4 elements in .list"; }} } } # bug | 13:53 | |
committable6 | dogbert17, ¦HEAD(80a3255): «not ok 1 - we correctly have 4 elements in .list␤# Failed test 'we correctly have 4 elements in .list'␤# at /tmp/tJus6tS0Qr line 1␤# expected: '4'␤# got: '3' «exit code = 1»» | ||
jnthn | That one is simply that when spesh inlines things, the backtraces don't know the inlinees. | 13:54 | |
dogbert17 | aha | ||
jnthn | And...not quite so simple to fix | ||
Though we should do it at some point | |||
dogbert17 | hardly a disastrous problem | 13:55 | |
problem is; I'm running out of spesh problems :) | |||
jnthn | heh | ||
timotimo has the first rough draft of incidents searching | 13:59 | ||
doesn't seem quite right, though | 14:00 | ||
jnthn | Seems I've managed to eat my way through my available Perl 6 funding for the moment (though hopefully a bit more coming soon to do some more concurrency stuff). So for now I'm back to what can I do in my copious free time, and should probably look at other $dayjob things for a bit. I'm also impressively behind on blogging... | ||
nwc10 | blogging++ | 14:03 | |
6guts.wordpress.com/2017/08/06/moa...ring-data/ -- You won't believe what happens next! | |||
jnthn | haha | 14:04 | |
timotimo | ouch! :) | 14:10 | |
jnthn | The next bit is a shorter post, 'cus it's about the planner :) | ||
timotimo | "i've had this post planned for a whole month now!" | ||
jnthn | I got distracted | 14:11 | |
Something involving a cro and a conference... | |||
Apparently a bit of vacation too :) | 14:12 | ||
14:14 bisectable6 joined
timotimo | yeah! it works \o/ | 14:16 | |
14:18 robertle joined
timotimo | nine: i just pushed support for an "incidents" command to the heapanalyzer | 14:20 | |
please test it | |||
14:21 nwc10 joined
14:36 zakharyas joined
14:43 ilbot3 joined
14:49 ilbot3 joined
15:04 brrt joined
samcv | timotimo, is the heap snapshot analyzer going to get documentation? | 16:05 | |
yoleaux | 12:34Z <[Coke]> samcv: the class definition you give for utf8 in the docs doesn't compile. | ||
samcv | .tell [Coke] ok i made a change, hopefully this will fix | 16:09 | |
yoleaux | samcv: I'll pass your message to [Coke]. | ||
16:13 dogbert2 joined
16:29 AlexDaniel joined
timotimo | samcv: sure, what angle should i approach it from, and where should i put it? | 17:08 | |
samcv | how to use it. how it works. | 17:09 | |
in the docs folder. and if it's current, add it to the README.md in the docs folder too | |||
the readme used to say that it was all likely outdated, but when i made my string documentation i linked to that and said that was current. so if you add current stuff that's not just design docs add to that list | 17:10 | ||
also will be away for an hour or so | |||
timotimo | mhm | 17:11 | |
though perhaps the docs should go to the performance section of the perl6 docs or the readme and/or wiki of the heap snapshot analyzer instead? | |||
Skarsnik | should probably be a page with everything that help debug stuff x) | 17:19 | |
timotimo | the "performance" page already talks about the regular profiler | 17:59 | |
Geth | MoarVM/jit_nativecall: 16 commits pushed by (Stefan Seifert)++ review: github.com/MoarVM/MoarVM/compare/5...19209de674 |
18:04 | |
18:18 leont joined
Geth | MoarVM: niner++ created pull request #676: JIT compile native calls |
18:20 | |
18:31 zakharyas joined
jnthn | I see code to review... | 18:38 | |
uh...lots of it given I'm meant to also look at the expr JIT :) | |||
dogbert2 | m: my int $master = 0; await start { if cas($master, 0, 1) == 0 { say "Master!" }} xx 4 # this code fails on 32 bit systems, is that expected? | 18:44 | |
camelia | Master! | ||
jnthn | Yes | 18:45 | |
That's precisely why the atomicint type exists | 18:46 | ||
dogbert2 | cool, I took it from the atomicint documentation; should a comment be added or should int be changed to atomicint? | 18:47 | |
jnthn | Should be changed to atomicint :) | ||
Nice catch | |||
dogbert2 | should I fix it? | 18:48 | |
moritz | this is rather unperlish | ||
normally the operation dictates the semantics, not the type | |||
an atomic operation being silently unatomic just because its operand is unatomic seems dangerous | 18:49 | ||
jnthn | Why are you assuming it's silently unatomic? | 18:50 | |
m: my int32 $master = 0; await start { if cas($master, 0, 1) == 0 { say "Master!" }} xx 4 | 18:51 | ||
camelia | Tried to get the result of a broken Promise in block <unit> at <tmp> line 1 Original exception: Cannot atomic load from an integer lexical not of the machine's native size in block at <tmp> line 1 |
||
jnthn | It does that | ||
Very clear failure | |||
moritz | probably because I misinterpreted dogbert2's "fails" as "gives wrong result" | |||
sorry | |||
jnthn | Ah :) | ||
dogbert2 | my bad, I could have been more explicit :) | ||
example fixed | 18:55 | ||
next question, in the last few weeks 'make spectest' has become a lot slower, 20-25 percent in my case. Any theories as to why? | 19:05 | |
Zoffix didn't notice any slowdown in `make stresstest` | 19:08 | ||
But perhaps the new unicode tests are to blame? | |||
dogbert2 | perhaps, considering trying to figure out when this happened | 19:09 | |
compare these two for example: | 19:12 | ||
irclog.perlgeek.de/perl6-dev/2017-...i_14918057 | |||
irclog.perlgeek.de/perl6-dev/2017-...i_15123112 | 19:13 | ||
there's one large jump between July 26th and 31st, from ~220 secs to ~260 secs | 19:16 | |
20:07 bisectable6 joined, committable6 joined
20:13 committable6 joined
20:38 brrt joined
21:06 zakharyas joined
lizmat | jnthn: given a native descriptor, is there a way to create a PIO out of that ? | 21:07 | |
or: is there a way to close by a given native descriptor (fileno) ? | 21:08 | ||
so one wouldn't need to keep the PIO around, but just the fileno (native desc) | 21:09 | ||
samcv | Zoffix, what new unicode tests? | 21:22 | |
brrt | what's PIO? parallel? perl-level? | ||
lizmat | historically Parrot IO Object I believe | 21:23 | |
basically the VM's handle of a file | |||
geekosaur | I'd be surprised if that exists usefully | 21:26 | |
even C doesn't support it these days | |||
Zoffix | samcv: no idea. It was just a guess. I recall last time I noticed slowdown was due to new unicode tests, so I figured maybe dogbert2 was seeing that same slowdown | ||
geekosaur | (that is, you can make a new stdio file object from a descriptor, but you can't find an existing one that way) | ||
and if this is related to buffering, you need to find the existing one, not make a new one | 21:27 | ||
21:43 lizmat joined
jnthn | lizmat: No | 21:48 | |
yoleaux | 20:42Z <Zoffix> jnthn: are return type checks in protos simply NYI or are they not meant to work? This doesn't die: class { proto method f(--> Int) {*}; multi method f { "foo!" } }.new.f.say | ||
jnthn | lizmat: And it wouldn't help anyway | 21:49 | |
lizmat: Since your goal is to get the handle flushed on close, and the buffer hangs off the handle | |||
lizmat | jnthn: well, then I guess option #2 is not an option, as it thoroughly breaks the lock tests :-( | ||
jnthn | Option 2 is what we have now, I thought you were implementing option 1? | ||
lizmat | no, vv | ||
jnthn | (Keep a list of open handles) | ||
brrt | is there anything against doing that on the perl level? | 21:50 | |
lizmat | argh, yeah, you're right | ||
option #2 is what we have, option #1 breaks the locking tests | |||
jnthn | brrt: Keeping the list? That's the approach I've suggested, and that I know lizmat++ is working on | ||
The...locking tests?! | |||
lizmat | S32-io/lock.t | 21:51 | |
jnthn | Details? | ||
Oh, *file* locking | |||
lizmat | hangs on the first subtest of test 6 | ||
jnthn | I thought you meant S17-lowlevel/lock.t ;) | ||
Shoulda guessed but, well, that's what tiredness does | |||
That's...odd | |||
Any idea why it hangs? | 21:52 | ||
I mean, we shouldn't be doing anything different until exit | |||
lizmat | gist.github.com/lizmat/001726b47eb...498f3fda7d | ||
my work so far | |||
jnthn | Oooking | ||
uh, Looking | |||
Heh, binding into the array on fileno | 21:53 | ||
Cute. I hadn't thought of that and was figuring it would be an O(n) search... | |||
Or a hash on fileno | |||
I guess maybe the hash has slightly better properties if fd re-use doesn't happen | |||
lizmat | I thought we could spare the few bytes on that array | ||
jnthn | But anyway, that won't be the issue here | ||
Hmm, odd, don't immediately see how that'd cause issues | 21:56 | ||
oh but | |||
It's not threadsafe | |||
lizmat | but? | ||
jnthn | If handles are opened on different threads | ||
Then there's a race on $opened | |||
Since it's a dynamic array | |||
Dunno if that's *the* problem, but it's certainly *a* problem | |||
lizmat | aaahh... hmmm | 21:57 | |
jnthn | Would need a lock to protect it | ||
lizmat | so can we find out max fileno ? | ||
well, I was trying to do this without a lock | |||
assuming the fileno's would not be duped across threads | |||
timotimo | the user is free to raise the maximum fileno by quite a bit | ||
lizmat | so we only need to make sure the array has the max fileno size | ||
jnthn | Also the closing thread != the opening thread | 21:58 | |
brrt | that assumption is valid, but you might get concurrent resizes, i think | ||
jnthn | Or doesn't have to be | ||
Yeah, concurrent resizes might be what's messing this up at the moment | |||
Just lock it | |||
It's not that expensive. | |||
lizmat | ok, will try that | ||
timotimo | do we only need to store "is opened?" for every fileno? | ||
jnthn | In the scheme of things | ||
timotimo | we might want to store one per bit rather than one per byte or worse | ||
jnthn | timotimo: No, we need to keep track of the handles that are open, so we can close/flush them | ||
brrt | opening a file is already expensive anyway ? | ||
timotimo | ah, we need to know about the buffer, too | 21:59 | |
jnthn | Right | ||
I have no idea if fds get reused if you open lots of files over the lifetime of a process? | |||
(open/close, I mean) | |||
timotimo | no clue where to look to find out what subset of our target platforms would make such a guarantee | 22:00 | |
but i'd assume re-use would happen eventually | |||
jnthn | Yeah, depends how eventually | 22:01 | |
If we're locking anyway then we could use a hash | |||
And delete the key | |||
lizmat | but re-use would not be an issue | ||
jnthn | Would force stringifying the fds is all | ||
lizmat | because that would imply the old one got closed, so the entry would be nulled in the list | ||
jnthn | lizmat: Yeah, my worry is more that if they're not re-used and it's a long-lived program, then we could read fd 100000 or something | 22:02 | |
22:02 dogbert2 joined
jnthn | Which sure is "only" 800 KB | 22:02 | |
s/read/reach/ | |||
timotimo | the default max fileno on my system is 1024, fwiw | ||
but remember that we also blow filenos with other things, like libuv related pieces for the event loop | 22:03 | ||
jnthn | ah, ok | 22:04 | |
timotimo | that's why there was this bug where creating a crapton of threads would segfault | ||
we didn't check the return value of creating uv eventloops | |||
lizmat | ok, I've not tried it with locking just yet, just by setting the list size to 1K | 22:05 | |
and it still hangs | |||
lizmat is getting tired so will look at this tomorrow with some fresh eyes | |||
timotimo is coming down with a little cold | 22:06 | ||
jnthn | sleep++ | ||
Geth | MoarVM: samcv++ created pull request #677: Optimize MVM_string_gi_move_to and use for index* ops |
22:35 | |
Zoffix | .tell lizmat did a bit of debugging of the lock hanging issue. Golfed version is start `./perl6 -e '"foo".IO.open(:w).lock; sleep'` in one terminal. In another terminal in same dir run `./perl6 -e 'start { "foo".IO.open(:w).lock }; sleep 1'` and the second one'll never quit after 1 second, despite the lock running in a separate Promise. Hope that helps :) | 22:55 | |
yoleaux | Zoffix: I'll pass your message to lizmat. | ||
jnthn | Oh, that makes sense | 22:56 | |
Operations on a syncfile are protected by a mutex | 22:57 | ||
The .lock operation will be holding it | |||
The .close operation won't be abl to get it | |||
Zoffix | Oh | ||
jnthn | *able | ||
Zoffix | .tell lizmat jnthn++ figured out the cause: irclog.perlgeek.de/moarvm/2017-09-06#i_15127594 | ||
yoleaux | Zoffix: I'll pass your message to lizmat. | ||
jnthn | The point of the mutex being to protect a file handle data structure when concurrent threads attempt to use it, and thus not explode violently | 22:58 | |
I'm too tired to have any useful thoughts beyond knowing that's probably what's going on, alas | 22:59 | ||
Is the point of the test to check we can't acquire the lock? | 23:01 | ||
Zoffix | Yes. | 23:02 | |
jnthn | Hmm | ||
I guess if we're willing to change the test then we could make the close-at-exit thing optional make the .open call pass :!close-at-exit or so | |||
Zoffix | It checks whether .lock blocks (printing some stuff after the .lock line) in that case and because it runs in a promise it exits after 1 second and the test just checks the stuff after the .lock line never got printed | 23:03 | |
FWIW none of the .lock/.unlock tests are part of 6.c-errata so I'd guess there's more freedom to alter them | |||
jnthn | I guess it hinges on if we think anyone would write such a program in a non-test situation | 23:05 | |
They potentially would | |||
If they wanted to try and acquire the lock, but whine/exit if they failed | 23:07 | |
And we'd block their exit | |||
I think we should provide a :!close-at-exit either way for folks who run into other situations we don't discover ourselves | 23:08 | ||
If we want to magic it away, we could remove the file from the auto-close list while attempting to acquire the lock, and put it back in once we have | |||
'night, #moarvm | 23:14 | ||
timotimo | night jnthn | ||
23:20 travis-ci joined
travis-ci | MoarVM build failed. Jonathan Worthington 'Move throw -> goto optimization to second pass. | 23:20 | |
travis-ci.org/MoarVM/MoarVM/builds/272433122 github.com/MoarVM/MoarVM/compare/2...f6be4ff6d8 | |||
23:20 travis-ci left
23:28 nwc10 joined
23:40 AlexDaniel joined