Perl 6 language and compiler development | Logs at colabti.org/irclogger/irclogger_log/perl6-dev | For toolchain/installation stuff see #perl6-toolchain | For MoarVM see #moarvm
Set by Zoffix on 27 July 2018.
00:30 patrickz left 01:13 AlexDaniel left 03:19 dalek left
japhb jnthn: Seeing a weird slowdown when run()'ing many child processes (each one getting .spurt(:close)'ed some input, but out/err left unopened). Even though I close the input after spurt, and my reading of the Proc code indicates it should await the process when the last pipe handle closes, it *appears* like all these Procs are making rakudo fall over. (I don't see the subprocesses using any CPU in top, 05:02
just the perl6 process chewing all my cores.)
Any idea what I'm missing? Is there a way to force Proc to await the child and release all resources/pipes/taps/etc.? 05:03
FWIW, I simplified the child processes to just 'echo foo bar baz', since echo accepts but ignores its input, and the problem remains. 05:05
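A minimal sketch of the pattern being described (child command and count are illustrative, not code from the log):

    # Spawn many children synchronously; each gets its input
    # spurt()ed with :close, with out/err left unopened.
    for ^1000 {
        my $proc = run «echo foo bar baz», :in;
        $proc.in.spurt("foo bar baz\n", :close);
    }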
timotimo japhb: any help from strace with its follow-children mode, or with perf top or record/report? 06:23
yoleaux 02:34Z <MasterDuke> timotimo: i just tried profiling lizmat's code in github.com/rakudo/rakudo/issues/1421 and it's saying `push SETTING::native_array:682` isn't getting jitted. but a spesh log just shows two `# expr bail: Cannot handle DEOPT_ONE (ins=sp_guard)` for the after of that function. any idea what's up?
06:59 Geth left, synopsebot left, dalek joined, ChanServ sets mode: +v dalek, Geth joined, synopsebot joined, ChanServ sets mode: +v Geth, ChanServ sets mode: +v synopsebot, p6lert joined 07:42 patrickb joined 07:51 ufobat_ joined
lizmat Files=1259, Tests=98237, 396 wallclock secs (25.15 usr 7.16 sys + 2834.53 cusr 242.17 csys = 3109.01 CPU) 08:31
Geth rakudo: 194e37788f | (Ben Davies)++ | 3 files
Don't use nqp::execname on OpenBSD

Fixes one of two parts of #2824
08:35
rakudo: cec0839fa6 | (Elizabeth Mattijsen)++ (committed using GitHub Web editor) | 3 files
Merge pull request #2845 from Kaiepi/executable

Don't use nqp::execname on OpenBSD
synopsebot RAKUDO#2824 [open]: github.com/rakudo/rakudo/issues/2824 [BLOCKER][BSD] Cannot install zef (or run much of anything without MVM_JIT_DISABLE)
roast: a675a0a1b2 | (Elizabeth Mattijsen)++ | S32-str/sprintf-u.t
Parameterize sprintf testing of "u" format

Mostly intended to register the behaviour of the current sprintf implementation, which was not tested for some combinations of flags and values.
09:18
rakudo: bdf6f151db | (Elizabeth Mattijsen)++ | t/spectest.data
Add extensive testing of "u" format in sprintf
09:20
lizmat m: dd PROCESS::.keys 09:22
evalable6 ("\$SCHEDULER", "\$OUT", "\$AWAITER", "\$SPEC", "\$ERR", "\&chdir", "\$PID", "\%ENV", "\$IN").Seq
lizmat m: PROCESS:<$*FOO> = 42; say $*FOO 09:25
evalable6 (exit code 1) ===SORRY!=== Error while compiling /tmp/8PkLPDv7RG
Precedi…
09:26
lizmat, Full output: gist.github.com/b7f6698fe1796f428b...76c7d5f197
lizmat m: PROCESS::<$*FOO> = 42; say $*FOO
evalable6 (exit code 1) Dynamic variable $*FOO not found
in block <unit> at /tmp/V44G1Licum line 1
SmokeMachine m: PROCESS::<$FOO> = 42; say $*FOO
evalable6 42
lizmat indeed 09:27
SmokeMachine lizmat: now I got it! thanks! 09:28
m: $PROCESS::FOO = 42; say $*FOO 09:33
evalable6 42
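Distilling the exchange above: the key in the PROCESS:: stash carries the sigil but not the * twigil; the twigil only marks a dynamic lookup at the use site:

    PROCESS::<$FOO> = 42;   # or equivalently: $PROCESS::FOO = 42
    say $*FOO;              # 42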
|Tux| Rakudo version 2019.03.1-216-gbdf6f151d - MoarVM version 2019.03-76-gc10fee65c
csv-ip5xs         0.703 - 0.729
csv-ip5xs-20      6.020 - 6.022
csv-parser       21.582 - 23.107
csv-test-xs-20    0.437 - 0.441
test              6.979 - 7.515
test-t            1.630 - 1.732
test-t --race     0.855 - 0.858
test-t-20        27.776 - 29.021
test-t-20 --race  9.947 - 10.565
10:19
20190416 1.666ā™20180928 1.662ā™20181015 1.659ā™20181011 1.647ā™20190417 1.630ā™
pompomtiedom
lizmat whee! 10:22
|Tux| twitter.com/Tux5/status/1118459720623435776
jnthn Nice :) 10:23
lizmat [Tux]: have you tried cleaning out the .precomp dirs for an additional gain ?
|Tux| yes: tux.nl/Files/rebuild 10:24
lizmat okido :-) 10:25
m: dd sprintf('%-08.2f,2.71) # huh ? 10:40
evalable6 (exit code 1) ===SORRY!=== Error while compiling /tmp/5haTnRHBvz
Unable …
lizmat, Full output: gist.github.com/6aeb4ebda3901ccb40...dd57cd70ee
lizmat m: dd sprintf("'%-08.2f",2.71) # huh ?
evalable6 "'0002.710"
lizmat 3 digits ??
m: dd sprintf("'%-08.2f",3.1415) # huh ?
evalable6 "'0003.140"
lizmat m: dd sprintf("'%-08.2f",3.145) # huh ?
evalable6 "'0003.150"
10:52 benchable6 left 10:53 benchable6 joined, ChanServ sets mode: +v benchable6
Geth roast: 95322a5d18 | (Elizabeth Mattijsen)++ | S32-str/sprintf-f.t
Parameterize sprintf testing of "f" format

THIS IS TESTING FOR THE CURRENT BROKEN BEHAVIOUR OF sprintf("%f"), while we hopefully decide on a way forward:
   github.com/perl6/problem-solving/issues/11
10:53
rakudo: d87aec38f4 | (Elizabeth Mattijsen)++ | t/spectest.data
Add extensive testing of "f" format in sprintf
10:55
releasable6 Next release in ā‰ˆ3 days and ā‰ˆ7 hours. 12 blockers. Please log your changes in the ChangeLog: github.com/rakudo/rakudo/wiki/ChangeLog-Draft 11:00
11:03 benchable6 left, benchable6 joined, ChanServ sets mode: +v benchable6 11:16 travis-ci joined
travis-ci Rakudo build failed. Elizabeth Mattijsen 'Add extensive testing of "u" format in sprintf' 11:16
travis-ci.org/rakudo/rakudo/builds/521161365 github.com/rakudo/rakudo/compare/c...f6f151dbe8
11:16 travis-ci left
Guest2775 lizmat: test.c:4:12: warning: '0' flag ignored with '-' flag in gnu_printf format [-Wformat=] printf("'%-08.2f",2.71); 12:33
result is '2.71 plus 4 spaces at the end 12:34
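A sketch contrasting the two behaviours (Rakudo's output is from the snippets above; C's is from Guest2775's report):

    # '-' (left-justify) should render the '0' (zero-pad) flag
    # meaningless, and '.2' should give two fractional digits:
    say sprintf("'%-08.2f", 2.71);
    # C printf:           '2.71        (zero flag ignored, left-justified)
    # Rakudo at the time: '0002.710    (zero-padded, three fractional digits)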
Geth roast: a12713caea | (Elizabeth Mattijsen)++ | S32-str/sprintf-e.t
Parameterize sprintf testing of "e" format

Mostly intended to register the behaviour of the current sprintf implementation, which was not tested for some combinations of flags and values.
13:13
rakudo: 298c313c1d | (Elizabeth Mattijsen)++ | t/spectest.data
Add extensive testing of "e" format in sprintf
lizmat m: use Test; ok sprintf('%20.2e', 3.1415) eq ' 3.14e+000' | ' 3.14e+00', '20.2 %e'; # weird test 13:18
evalable6 ok 1 - 20.2 %e
Guest2775 gcc outputs 3.14e+00 13:30
lizmat which is what one would expect if the precision is 2 13:31
I wonder why that test is also testing for the erroneous result with 3 digits in the exponent
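A reduced form of that test (field width dropped for clarity; an illustrative sketch, not the roast test verbatim):

    use Test;
    # C's printf emits a minimum of two exponent digits ('3.14e+00');
    # the three-digit form is the erroneous result that the Junction
    # in the test also accepts.
    ok sprintf('%.2e', 3.1415) eq '3.14e+00' | '3.14e+000', 'precision 2 %e';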
Geth roast: 0958f70f08 | (Elizabeth Mattijsen)++ | S32-str/sprintf-f.t
Simplify sprintf("%f") testing, also test "F"

The results of all formats with "#" are the same as the results without
  "#", so create those tests programmatically. Also perform the tests for "F"
by uppercasing the format string.
13:32
roast: 78d5fe2c9e | (Elizabeth Mattijsen)++ | S32-str/sprintf-e.t
Simplify sprintf("%e") testing, also test "E"

The results of all formats with "#" are the same as the results without
  "#", so create those tests programmatically. Also perform the tests for "E"
by uppercasing the format string and the expected result.
13:38
lizmat afk for a bit& 13:39
s/if the precision is 2//
Geth roast: 2624c93600 | (Elizabeth Mattijsen)++ | S32-str/sprintf-d.t
Simplify sprintf("%d"|"%i") testing

The results of all formats with "#" are the same as the results without
  "#", so create those tests programmatically.
13:53
roast: 795a3b67ac | (Elizabeth Mattijsen)++ | S32-str/sprintf-u.t
Simplify sprintf("%u") testing

The results of all formats with "#" are the same as the results without
  "#", so create those tests programmatically.
13:55
lizmat really afk& 13:56
15:47 patrickb left 15:56 patrickb joined
japhb timotimo: `strace -r -T -o strace-out -f -ff perl6 my-top-level-program` gave interesting results. For one thing, it started a bit slower, but performance seemed steady -- it had short pauses of < 1 second every 1-2 dozen subprocess launches, but continued for 1000 subprocesses without incident 16:03
In comparison, with the strace removed, perl6 ground to a halt after launching 198 subprocesses.
heisenbug. :-( 16:04
16:37 vrurg left, vrurg joined
japhb timotimo, jnthn: Any idea what else to try in tracking this down? 17:28
timotimo "perf record" could help you find out what it's doing; github.com/Netflix/flamescope would let you go by time-of-execution, i.e. what was done near the start, what was going on while it was so slow 17:44
japhb Actually, before I try perf, I'm going to try *not* following the children in the strace and see if that makes a heisendifference. 17:56
timotimo mhm 17:57
japhb Oooh, tracing only the parent (not following the children) slows to a halt the same way it does with no tracing at all. So something about the tracing wrapper affects the behavior of the kids.
japhb wonders about -D 17:58
timotimo japh-D?
japhb trace as a Detached grandchild of the tracee, rather than as a wrapper of the tracee. 17:59
timotimo hm, not sure that makes such a big difference
but perf ought to have much less of an impact
there's also perf trace which gives similar data to strace, but much much faster
japhb OK, strace -D of just the parent fails slightly faster than otherwise (189 children)
timotimo and then there is the ftrace tracing framework in the linux kernel that just lets you do anything and everything you could possibly imagine
japhb timotimo: Ah, interesting. I'll try that after my last strace variant 18:00
OK strace -f -D (Follow and Detach) is even smoother than just strace -f; it completes all 1000 subprocesses at (as far as I can tell) a nearly perfectly even rate. 18:01
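Summarizing the strace variants tried (commands reconstructed from the discussion; flags as documented in strace's manual):

    strace -r -T -o strace-out -f -ff perl6 prog  # follow children: steady, all 1000 complete
    (no strace)                                   # grinds to a halt after ~198 children
    strace perl6 prog                             # parent only: halts like the untraced run
    strace -D perl6 prog                          # parent only, detached: halts slightly sooner (~189)
    strace -f -D perl6 prog                       # follow + detach: smoothest, all 1000 complete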
Bah, meeting 18:02
timotimo oh, bleh
18:18 lucasb joined 20:07 patrickb left 20:11 patrickb joined
jnthn japhb: Are you using Proc::Async directly, or the sync layer atop of it? 20:53
japhb jnthn: Sync layer on top of it (intentionally, as I was trying to demonstrate to someone how their Perl 5 code using `open '|-', ...` would look in Perl 6). But the demo doesn't work so well if the Perl 6 version deadlocks. ;-P 21:10
Geth nqp: 8fc46c3b98 | usev6++ | src/vm/jvm/runtime/org/perl6/nqp/runtime/ExceptionHandling.java
Fix typo in comment
21:12
jnthn japhb: Yeah, it's interesting to know if it deadlocks on Proc::Async also 21:16
japhb jnthn: OK, I'll try to work up an equivalent in Proc::Async and see what happens. 21:20
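A hedged sketch of what such a Proc::Async equivalent might look like (command and input are placeholders, not japhb's actual code):

    my $proc = Proc::Async.new: :w, «echo foo bar baz»;
    my $done = $proc.start;                  # Promise kept when the child exits
    await $proc.write("foo bar baz\n".encode);
    $proc.close-stdin;                       # signal EOF to the child
    await $done;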
21:28 vrurg left, vrurg joined
japhb jnthn: Can gather/take and react/whenever cross-nest? (Either as `@results = gather { react { whenever { take } } }` or as `react { @results = gather { whenever { take } } }`) 21:42
samcv . 21:48
yoleaux 16 Apr 2019 07:42Z <jmerelo> samcv: OK. Good luck. Keep me posted.
jnthn No 21:55
japhb jnthn: That's what I figured, just wanted to confirm. 22:09
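Since gather/take can't cross a react boundary, one workaround (my assumption, not from the log) is to collect inside the react block directly:

    my @results;
    react {
        whenever Supply.from-list(1..5) -> $v {
            @results.push: $v * 2;   # instead of take()ing across react
        }
    }
    say @results;   # [2 4 6 8 10]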
Dang it, my Proc::Async version dies with an error message after a few processes. 22:10
It fails with `This process is not opened for write` after 4-5 subprocesses, even though I created all of them with `my $proc = Proc::Async.new: :w, << my-child and args here >>`. 22:12
timotimo perhaps some sharing is going on between the threads? 22:13
japhb timotimo: I'm not sure how. I'm following the basic structure of jnthn's answer in stackoverflow.com/questions/552651...-is-this-a 22:14
... except I'm writing in to the proc, rather than reading from it.
But otherwise pretty dang similar. 22:15
timotimo hmm, yeah, that's probably correct :/ 22:16
rakudo version and such? 22:17
japhb This is Rakudo version 2019.03.1-213-g7ee08bb10 built on MoarVM version 2019.03-76-gc10fee65c 22:19
timotimo thought so, OK
jnthn Worth an issue, for sure.
japhb jnthn: Rakudo, I assume? 22:20
I'll see if I can golf a bit
22:23 vrurg left, vrurg joined
jnthn japhb: Yes, though not sure where the bug will turn out to be 22:29
MasterDuke jnthn: have you looked at github.com/rakudo/rakudo/issues/2827 any? it bisected to github.com/rakudo/rakudo/commit/46...6034408c8, but i don't really know why and haven't been able to figure out much since 22:32
22:33 AlexDaniel joined
AlexDaniel p6lert: help 22:34
p6lert AlexDaniel, github.com/perl6/alerts P6lert commands: [insta]?add ALERT, update ID ALERT, delete ID; ALERT format: ['severity:'\S+]? ['affects:['<-[\]]>+']']? ALERT_TEXT
AlexDaniel p6lert: severity:info Future Rakudo and NQP releases will come with tarballs that are signed with a different key. This only affects those who run automation that performs related checks. 22:41
p6lert: add severity:info Future Rakudo and NQP releases will come with tarballs that are signed with a different key. This only affects those who run automation that performs related checks. 22:42
p6lert AlexDaniel, Added alert ID 10: alerts.perl6.org/alert/10
AlexDaniel kawaii: this is an interesting tool ā†‘
kawaii: see all alerts here: alerts.perl6.org/
use it if things go wrong :)
jnthn MasterDuke: No; it's not really on my todo list either. github.com/rakudo/rakudo/issues/2805 I hope to figure out tomorrow. 22:54
japhb jnthn, timotimo: OK, golfed and added the async version of the problem in github.com/rakudo/rakudo/issues/2847 22:57
MasterDuke jnthn: no worries, i wasn't really thinking of it as a release blocker or anything like that, just annoying. but any suggestions for where to look? i was able to change some of the behavior, but it didn't really help me understand what/how to fix 22:58
jnthn MasterDuke: No immediate guesses, alas; I'd probably run it with debug server and then look at the stack trace or some such 23:01
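For reference, a usage sketch of the debug-server route jnthn suggests (assuming Rakudo's MoarVM debug flags and the App::MoarVM::Debug client; verify against your build):

    perl6 --debug-port=9999 --debug-suspend the-script.p6
    # then attach with the moar-remote client to dump per-thread
    # stack traces while the program is wedged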
MasterDuke k, i'll see if i find anything useful 23:02
jnthn sleep time o/ 23:10
japhb jnthn, timotimo: Synchronous case added as github.com/rakudo/rakudo/issues/2848 23:17
ugexe in the synchronous case it is deadlocking -- see: github.com/rakudo/rakudo/issues/2834 23:20
i would guess that strace slows things down enough that the deadlock doesn't occur 23:22
japhb ugexe: Why does it get slower for a while before actually deadlocking? 23:26
Also, this is a LOT more spawns than just the thread count, and the spawns don't happen super fast (a few per second, not thousands per second) 23:27
ugexe but does it get further before deadlocking when setting RAKUDO_MAX_THREADS=500 ?
japhb And finally, I'm not using 'start' at all -- I'm synchronously running the children. There should only ever be one child alive.
Need to head for a bus, will check when I get there. 23:28
ugexe you assume that sync run/shell do not themselves spawn threads 23:29
im guessing something like rakudo slows down when it tries to create some more FOO type threads in the thread pool, until it eventually reaches the maximum threads when trying to create more e.g. reader threads 23:34
i don't see any explicit thread spawning in Proc.pm6 itself, but it sure seems like it's happening indirectly 23:37
well, the promises 23:41
23:47 lucasb left
japhb ugexe: Sure, definitely wrapping the sync API around the async API will result in the thread pool doing a lot of work while the child is alive ... but why would it still be holding resources after the synchronous run completes? 23:50
ugexe it must depend on how fast new processes are spawned, at least i'm guessing based on the fact using strace makes the problem go away 23:51
MasterDuke what about adding a small sleep after each process is finished?
ugexe so how fast new processes are spawned vs whatever rakudo is doing in the background to create the needed threads to read/whatever/close 23:52
japhb MasterDuke: Just tested with 'sleep .1;' after closing the input pipe, and while it slowed down the overall run, the behavior was essentially the same (and deadlocked after nearly the same number of children). Interestingly, since it was going slower, I watched it in atop and noticed the CPU stayed relatively low (11% of one core) for a while before suddenly ramping up near the end to nearly filling all 23:56
cores.
So something has very non-linear behavior.
MasterDuke and does smaller or larger values for MAX_THREADS change anything? 23:58
ugexe when running with RAKUDO_SCHEDULER_DEBUG=1 i eventually get 23:59