Zoffix timotimo: well I don't know how to parse out the relevant bits 00:00
MasterDuke timotimo: in stackoverflow.com/questions/468672...in-perl-6, his second version where he loops over his tokens could be made a lot faster by changing `for @search -> $token {` to `for @search -> Str $token {` and then removing the '{}' in the grep 00:03
i don't have an SO account 00:04
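A minimal sketch of the speed-up MasterDuke describes, with made-up variable names since the original Stack Overflow code isn't quoted here (possibly something like):
    # slower: untyped loop parameter, extra block wrapped around the grep matcher
    for @search -> $token {
        @results.append: @lines.grep: { .contains($token) };
    }
    # faster: typed parameter, matcher passed directly without the extra block
    for @search -> Str $token {
        @results.append: @lines.grep: *.contains($token);
    }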
Zoffix AlexDaniel`: I don't get how to use your golfed server... It seems everything just quits right away and no tests get run 00:12
AlexDaniel`: with this: gist.github.com/zoffixznet/88d58e5...1499fdc689 00:13
AlexDaniel`: nm, was missing `use` for Client 00:17
The one thing I'm noticing is that changing `whenever $sock.Supply -> $got {` to `$sock.Supply.act: -> $got {` kinda makes the behaviour the same (broken on both 2017.09 and HEAD)... So are the docs wrong that `whenever` is like calling .act? docs.perl6.org/language/concurrenc...y-whenever 00:20
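The two forms being compared are roughly these (a hedged sketch, not the actual golfed server code):
    # whenever inside a react block:
    react {
        whenever $sock.Supply -> $got { ... }
    }
    # versus tapping the supply directly with .act:
    $sock.Supply.act: -> $got { ... }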
s: &WHENEVER
SourceBaby Zoffix, Sauce is at github.com/rakudo/rakudo/blob/eb1f...y.pm#L2009
Zoffix ahh
ugexe i thought react/whenever makes sure everything gets initiated properly before you start it 00:21
Zoffix ahhh
timotimo MasterDuke: do you have a factor of speed improvement? or fraction of time spent? 00:22
gotta go to sleep before i report on it though 00:23
gnite!
MasterDuke timotimo: on my machine went from 1.4s to .5s
Zoffix \o
MasterDuke later...
timotimo so subtracting startup it might have been 1.3 to 0.4 so like 3.3x faster? 00:24
anyway, laters
MasterDuke probably about that yeah 00:25
travis-ci Rakudo build failed. Zoffix Znet 'Bump NQP' 01:21
travis-ci.org/rakudo/rakudo/builds/292329779 github.com/rakudo/rakudo/compare/b...1febd56583
buggable [travis build above] ✓ All failures are due to: GitHub connectivity (1 failure).
Zoffix has successfully extracted WebSocket module from the buggy code 02:10
ZofBot: on to HTTP::Tiny::Server-ectomy! 02:11
ZofBot Zoffix, timotimo++ :-)
Zoffix hah
All modules out. 23-line golf now \o/ 02:42
3 lines + 2 lines of a sleep and a brace. 02:58
Good enough to debug it now...
man, would be nice to eat this bug.... 03:04
ZofBot: I bet it's delishus
ZofBot Zoffix, He pushed it away from him, aware only that it was the memory of some action which he would have liked to undo but could not
Zoffix ZofBot: how's your brain now? 03:06
ZofBot Zoffix, _I am just off to the theatre
Zoffix OK then
holy shit. I fixed it! 03:29
Now, on to figuring out why :)
So "supervisor" doesn't watch the afinity workers, does it? 04:02
Geth rakudo/nom: 176a6fae07 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Fix incorrect queue size measurement

We're mistakenly calling .elems on the AffinityWorker, which will just return 1, messing up our measures of which worker is less busy.
Use its .queue instead; we already grabbed it into a var a few lines up.
04:03
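The shape of that fix, as an illustrative sketch rather than the exact ThreadPoolScheduler source (placeholder names throughout): when picking the least-busy affinity worker, the comparison was being done on the worker object instead of on its queue.
    my $worker-queue := $worker.queue;        # already pulled into a variable a few lines up
    # buggy:  $worker.elems        ($worker is a single AffinityWorker object, so this is always 1)
    # fixed:  $worker-queue.elems  (the number of work items actually waiting on that worker)
    if $worker-queue.elems < $lowest-seen {   # $lowest-seen is a placeholder name
        $chosen-worker = $worker;
        $lowest-seen   = $worker-queue.elems;
    }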
Zoffix (that's not THE bug fix; just something else I spotted) 04:04
Well, I think imma give up and add a cheatsy fix and a test and jnthn++ can then check it out and see how to make it sane 04:15
Geth roast: 74445ddf8a | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
Add test for hang in supply in a sock

  github.com/tokuhirom/p6-WebSocket/...-339120879
RT #132343
04:25
synopsebot RT#132343 [new]: rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
Zoffix ^ That's the golf of the issue
AlexDaniel` Zoffix: awesome!! 04:27
Zoffix stresstesting a fix that fixes it ATM. But it's just a hack that adds another worker 04:28
AlexDaniel` actually that sounds about right? 04:30
anyway, I'm leaving for ≈8 hours, see you later 04:31
o/
Zoffix Thinking more about it, it might not even fix the module, just the test.
Well. I'm guessing jnthn will come back tomorrow and it'll give him an idea of what's to fix :)
ZOFVM: Files=1283, Tests=152774, 160 wallclock secs (22.04 usr 3.93 sys + 3415.68 cusr 203.89 csys = 3645.54 CPU) 04:32
AlexDaniel` ZofBot: death to timezones!
ZofBot AlexDaniel`, The change in the mater is marvellous
Geth rakudo/nom: ce7e5444a2 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Add hackish fix deadlock for supply in a sock

  github.com/tokuhirom/p6-WebSocket/...-339120879
RT #132343 Test: github.com/perl6/roast/commit/74445ddf8a
Just pop in another worker in a case where we have just one and whose queue is empty. This fixes the bug demonstrated by the test, but it doesn't address the core cause as I don't understand it.
Need proper fixin'.
04:33
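Roughly what the hack amounts to (a sketch; the real commit touches ThreadPoolScheduler internals not shown here, and the helper name is hypothetical):
    # if there is exactly one affinity worker and even its queue is empty,
    # add another worker so newly queued work has somewhere to go
    if @affinity-workers == 1 && @affinity-workers[0].queue.elems == 0 {
        self!add-affinity-worker();   # hypothetical helper name
    }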
AlexDaniel` Zoffix++ great progress
AlexDaniel` &
Zoffix Yup. Just as I figured. It fixes the test but not the module -_- 04:37
dammit 04:38
Well, the other fix would be to remove :hint-affinity in .queue methods in Async sock and make it use regular queue 04:39
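In code terms that workaround would look roughly like this inside IO::Socket::Async (a sketch of the idea, not the exact source):
    # before: each async socket gets a queue pinned to one affinity worker
    # my $queue := $*SCHEDULER.queue(:hint-affinity);
    # workaround: use the shared general queue instead
    my $queue := $*SCHEDULER.queue;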
.tell jnthn I golfed AlexDaniel`'s issue into a test: github.com/perl6/roast/commit/74445ddf8a and committed a hack that makes the test pass: github.com/rakudo/rakudo/commit/ce7e5444a2 but WebSocket module still fails its tests because I'm guessing it got more than one AffinityWorker active so the deadlock still happens and isn't fixed by my hack. No idea why the deadlock actually occurs 04:41
yoleaux Zoffix: I'll pass your message to jnthn.
Zoffix so dunno how to fix. My hack needs to be reverted
.tell jnthn so dunno how to fix. My hack needs to be reverted 04:42
yoleaux Zoffix: I'll pass your message to jnthn.
Zoffix drops to bed
AlexDaniel` I think there's only one affinity worker there 04:51
travis-ci Rakudo build passed. Zoffix Znet 'Fix incorrect queue size measurement 05:13
travis-ci.org/rakudo/rakudo/builds/292427250 github.com/rakudo/rakudo/compare/e...6a6fae076a
Rakudo build passed. Zoffix Znet 'Add hackish fix deadlock for supply in a sock 05:58
travis-ci.org/rakudo/rakudo/builds/292434436 github.com/rakudo/rakudo/compare/1...7e5444a2c4
[Tux] test-t 3.417 - 3.631 (/me runs it again) 07:36
This is Rakudo version 2017.09-503-gce7e5444a built on MoarVM version 2017.09.1-622-g6e9e89ee 07:39
csv-ip5xs 1.183 - 1.200
test 11.841 - 11.852
test-t 3.110 - 3.152
csv-parser 12.454 - 13.021
tux.nl/Files/20171025095340.png ← that was in my spam folder. perl5 or perl6? Spam or not? (I'm not going to answer that) 07:54
JimmyZ [Tux]: I confirmed it, it's not spam 08:04
[Tux] so, should I (try to) answer that? 08:06
FWIW I am not a mac user, so anyone else could do a better job than my saying "zef install Text::CSV" of "cpan "Text::CSV" 08:07
JimmyZ [Tux]: yeah, he is a developer
[Tux] s/" o\Kf/r/
sent 08:08
JimmyZ I don't know him either, just confirmed it from qq.com :p, and thanks 08:09
[Tux] commutes ... 08:10
Zoffix AlexDaniel`: like in total? Nah. I can repro the bug again in my test if I just fire off a few listen socks before the original test: gist.github.com/zoffixznet/2b23640...6a1a199536 09:04
And yeah, it bails out exactly where I expected it would, in the loop before my hack 09:05
oh sweet. found another piece of a puzzle 09:10
starting to understand the bug 09:32
... get in its head. Oh yeah...
lizmat Zoffix: looks like ce7e5444a2c4aa69c2e has a severe performance effect on hyper: test-t --hyper from 1.16 -> 1.47 seconds :-( 09:59
argh, sorry for the noise 10:00
I also had a 1202 running :-(
Zoffix :)
FWIW R#1209 has some failures without pack/unpack involved, but I didn't quite understand if it was just the tests or if it was meant to work 10:01
synopsebot R#1209 [open]: github.com/rakudo/rakudo/issues/1209 Most Blob/Buf subtypes are specced, documented, broken and unusable
lizmat I already replied to that: I'd rather see pack/unpack deprecated -> removed and PackUnpack distro worked on 10:02
Zoffix yeah
samcv what is the issue with pack/unpack? is there some other alternative for working with binary data? as far as i know we have tons of functions for working with strings but none really for extracting stuff from binary data 10:04
or is it just that it's sufficiently complex? maybe we need some simple things that can do parts of what pack does without tons of overhead? 10:05
so you can at least extract parts of a buffer as certain types of data even if we don't have the full template type thing that pack/unpack has hmm
Zoffix I only remember someone having an experimental impl of it ages ago (2015?) and no one caring about pack/unpack really. Seems only a handful of Perl 5 users kinda expect that feature to be a given ¯\_(ツ)_/¯ 10:06
DrForr was considering Spreadsheet::WriteExcel, IIRC it needs pack()/unpack() pretty extensively... 10:07
(just checked the tests...
)
samcv i might be willing to work on pack/unpack since i think it's important; even if it doesn't seem that important, not having it would be a gap in perl6 imo 10:08
lizmat my "dream" is still a combination of a special encoding and syntactic sugar that would allow you to use the regular expression engine on binary data
samcv that would be great 10:09
lizmat all we really need I think, is a way to encode each byte value to a synthetic
and a way to specify that synthetic in a regular expression
samcv is that efficient?
if you had a lot of data 10:10
DrForr samcv: Agreed, it's needed, especially important for Excel and friends, which for better or for worse still runs a lot of businesses.
samcv i remember when i was working with Buf the main thing sticking out was, there aren't many ops to work with them
lizmat was hoping [Tux] would be sufficiently inspired to take over PackUnpack :-)
samcv you can create buf's but... using them..
|Tux| if I was jobless and motivated enough, sure :P 10:11
lizmat hehe :-)
samcv is pack/unpack in perl6 supposed to basically be a copy of perl5's functionality? 10:12
lizmat well, that's the question
pack/unpack comes from a completely untyped world
it always felt like a poor fit for the more typed Perl 6 world to me 10:13
samcv that could very well be the case. though i do think there needs to be some way to work with Buf's hmm 10:16
DrForr I think too it's more akin to "Here's a C data structure serialized, really it should be a C library but too much work." 10:17
|Tux| DrForr - for *me* pack/unpack is a very convenient way to store data in hashes and hash *KEY*s, to be able to sort on combined keys and to pass data around perl5 process-flows efficiently and FAST 10:19
serialization is not my first goal
Ā«pack "s>s>s>", 101, 25, 101;Ā» is something I use hundreds of times throughout my perl5 apps 10:20
a second reason is that on old machines with not enough memory, packed strings are way more efficient in large hashes than anonymous lists of the same content 10:22
DrForr Okay, that's fair enough. Would a library (like Liz's suggestion) still meet performance goals? Just thinking about being able to test different implementations... 10:26
|Tux| For the systems I need that for, perl6 is not an option :)
I bet it won't even build on an old HP-UX 11.11 with just 1 Gb of RAM 10:27
lizmat DrForr: there is such a library already: modules.perl6.org/search/?q=PackUnpack
DrForr The old machines, certainly I can see that not being an option.
Geth nqp/master: 6 commits pushed by pmurias++
lizmat [Tux]: I think libuv doesn't live on HP-UX, so that's a nono to start with
Zoffix: another datapoint on the #1202 saga: I can't get the code to crash if I change the say to a print 10:29
AlexDaniel` GH#1202
synopsebot GH#1202 [open]: github.com/rakudo/rakudo/issues/1202 [severe] Async qqx sometimes hangs or dies ( await (^5).map({start { say qqx{… …} } }) )
lizmat I changed it to print qqx{ ... }.chomp 10:30
so it would take up less screen space
ahhh.. got it to crash after 7 minutes 10:32
while it was running a spectest at the same time
ok, now after 43 secs 10:34
seems I only generate noise today :(
Zoffix wasn't following 1202 10:35
Zoffix backlogs #perl6 10:45
I'm with moritz... perl++ is an awful name and despite the OP of the blog post quoting two of my articles, it feels like they missed the point of why I wanted to rename :/ 10:46
For the moment I'm behind TimToady's "psix with p silent if you want to" name. It seemed to have a bunch of support from people (even more if you include those who like "P6" variant) and I like the somewhat secret (at least it was to me) reference to some literature or whatever it was :) 10:47
gfldex I wonder if we should add a non-parallel version of Ā». before adding autothreading. With >. one could fix fallout from the change easily. 10:51
Zoffix huggable: psix :is: psix is reference to en.wikipedia.org/wiki/Psmith
huggable Zoffix, Added psix as psix is reference to en.wikipedia.org/wiki/Psmith
lizmat update on GH #1202: if I put a print before the await, it doesn't seem to crash (running more than 10 minutes now) 10:52
synopsebot GH#1202 [open]: github.com/rakudo/rakudo/issues/1202 [severe] Async qqx sometimes hangs or dies ( await (^5).map({start { say qqx{… …} } }) )
lizmat which would indicate a race condition on the encoder...
*initialization of 10:54
Zoffix and update on RT#132343: bug seems to have something to do with $*AWAITER. The changes to it are commits github.com/rakudo/rakudo/commit/54...cd4a5c2d9f and github.com/rakudo/rakudo/commit/26...822686f507 that mention queuing and deadlocking... seems like exactly what's happening now and I'm guessing the new mechanism queues up the stuff to run 10:59
afterwards but "afterwards" doesn't happen or something. Like this code hangs gist.github.com/zoffixznet/4254b94...4957c2835a but if I shove the .list of supply {} into a separate promise, it works fine: gist.github.com/zoffixznet/59f4b46...b373fff6fa
synopsebot RT#132343 [open]: rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
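Stripped of the socket scaffolding, the difference between the two gists is roughly this pattern (a hedged reconstruction, since the gist URLs are truncated here):
    # hangs when it ends up running on a lone affinity worker:
    # my @v = supply { emit 42 }.list;
    # completes fine: the .list runs in its own promise on the general pool:
    my @v = await start { supply { emit 42 }.list };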
Zoffix gives up on it for now.
Geth rakudo/nom: 794235a381 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Revert "Add hackish fix deadlock for supply in a sock"

This reverts commit ce7e5444a2c4aa69c2e4421f02a287241199318e.
This commit is pointless and doesn't fix the bug in real-life code.
11:09
lizmat :-(
Geth roast: 45dcf9cd65 | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
TODO-fudge supply-in-a-sock deadlock test

Also fire off a few more socks to cover the deadlock that
  github.com/rakudo/rakudo/commit/794235a381 did not fix
11:11
roast: 7df4b4c4dd | (Zoffix Znet)++ | packages/Test/Util.pm
Prevent generation of `typescript` file by run-with-tty test

It's made by the `script` command and looks like there's a difference between Bodhi Linux and Debian `script` impls as to how the filename for that needs to be specified
11:18
Zoffix Considering Bodhi Linux is a fork of Ubuntu that's a fork of Debian, makes me wonder how flimsy that test really is :/ 11:19
Geth rakudo/affinity-worker-workaround: 418dbbd8a3 | (Zoffix Znet)++ | src/core/IO/Socket/Async.pm
Use general queue in async sock

Workaround for RT#132343 and
  github.com/tokuhirom/p6-WebSocket/...-339120879
that simply bypasses the affinity queue and uses the general one.
This like undos the benefits mentioned in ... (5 more lines)
11:38
synopsebot RT#132343 [open]: rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
Zoffix .tell AlexDaniel this branch fixes the bug in WebSocket but it does it by just avoiding the affinity queue: github.com/rakudo/rakudo/commit/418dbbd8a3 I guess that's fine if a release has to be made ¯\_(ツ)_/¯ 11:39
yoleaux Zoffix: I'll pass your message to AlexDaniel.
Zoffix .tell jnthn reverted my hack and hardened the test to cover the stuff my hack didn't fix. A bit more debug info on the bug here: irclog.perlgeek.de/perl6-dev/2017-...i_15351230 11:40
yoleaux Zoffix: I'll pass your message to jnthn.
Zoffix &
Geth nqp: 3fbc06669b | pmurias++ | src/vm/jvm/runtime/org/perl6/nqp/runtime/Ops.java
[jvm] Implement nqp::ordbaseat
11:47
nqp: 4c2ffcb144 | pmurias++ | 6 files
[jvm] Implement nqp::multidimref_* ops
11:49
nqp: 4e2d7c7bfa | pmurias++ | t/moar/07-eqatic.t
Fix typo in test description
11:51
nqp: b8f7784d40 | pmurias++ | t/nqp/102-multidim.t
Test nqp::multidimref_* ops
rakudo/js: 580a232e2f | pmurias++ | 3 files
Use multidimref_* on all backends
11:58
pmurias lizmat++ Zoffix++ # fixing .grab bug that was causing an extra test failure on the js backend 12:33
Zoffix doesn't remember fixing anything like that... 12:34
lizmat only vaguely remembers about nativeints and bigints 12:38
pmurias Zoffix: you wrote the test ;) 12:49
Zoffix Ah ok :)
lizmat afk for a few hours 14:06
Geth rakudo/nom: 97b11edd61 | pmurias++ | src/core/Buf.pm
Use nqp::bitneg_i instead of a nqp::bitxor_i and a mask
14:28
gfldex where does „Too many arguments in flattening array.“ come from? 14:47
Zoffix $ grep -FRn 'Too many arguments in flattening array' nqp 14:50
nqp/MoarVM/src/core/args.c:742: MVM_exception_throw_adhoc(tc, "Too many arguments in flattening array.");
gfldex code that is triggering it: gist.github.com/gfldex/b356f074c48...cfbb42e5bd 14:51
Zoffix m: (1...100000).race
camelia ( no output )
gfldex RaceSeq is lazy 14:52
Zoffix m: @ = (1...100000).race
camelia ( no output )
Zoffix *shrug* that's the error you get when you try to slip too many args
m: say |(1...100000)
camelia Too many arguments in flattening array.
in block <unit> at <tmp> line 1
Zoffix So maybe the guts are slipping too much somewhere 14:53
--ll-exception will probably tell where
gfldex looks
Zoffix So the WebSocket issue hangs here: github.com/rakudo/rakudo/blob/nom/...y.pm#L1948 14:56
gfldex stacktrace: imgur.com/a/ilqgg 14:58
m: say [max] (1..100000) 14:59
camelia Too many arguments in flattening array.
in block <unit> at <tmp> line 1
gfldex uses issue paper 15:00
there are likely more such bugs lurking, because 1 year ago nobody would have dared to touch a 100000 element list :-> 15:03
m: say [max] (1..2**15) 15:04
camelia 32768
gfldex m: say [max] (1..2**16)
camelia Too many arguments in flattening array.
in block <unit> at <tmp> line 1
AlexDaniel` is it a bug?
gfldex AlexDaniel`: how would you justify a reduction operator that can't reduce large lists? 15:05
Zoffix AlexDaniel`: you have a robo message for AlexDaniel... Dunno when you wanted to do the release... 15:06
AlexDaniel` Zoffix: I've seen it, thanks
Zoffix: there's a wild jnthn on github, but I don't know if he'll respond to anything related to the sched issue 15:07
Zoffix :)
AlexDaniel .
yoleaux 11:39Z <Zoffix> AlexDaniel: this branch fixes the bug in WebSocket but it does it by just avoiding the affinity queue: github.com/rakudo/rakudo/commit/418dbbd8a3 I guess that's fine if a release has to be made ¯\_(ツ)_/¯
AlexDaniel` I just came home so I'll wait a bit before doing any bad decisions :) 15:09
gfldex: fwiw, have you seen this? docs.perl6.org/language/traps#Argu...ount_Limit 15:10
Zoffix So what do you do when you can't just swap `.push` to `.append`? 15:11
AlexDaniel` why can't you?
Zoffix That limit is a bit of a thorn. 'cause any time you're doing `.something: |@a` you're risking a crash if you don't know how big @a is 15:12
AlexDaniel` Zoffix: yes, so the point is that with the current implementation you should not do that
Zoffix AlexDaniel`: 'cause there's no alternative routine?
AlexDaniel` m: say (1..2**16).reduce(&max)
camelia 65536
AlexDaniel` Zoffix: dunno, if I ever get into that situation I'd just cry 15:14
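Where the slip is only there to feed `.push`, `.append` does sidestep the limit, since it takes the iterable as a single argument instead of flattening it into the argument list at the call site, e.g.:
    my @a;
    # @a.push: |(1..100_000);   # can die with "Too many arguments in flattening array"
    @a.append: 1..100_000;      # no call-site flattening, so no limit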
Zoffix m: class Foo { method process(@x) {} }.new.process: 42, |(1..100000)
camelia Too many arguments in flattening array.
in method process at <tmp> line 1
in block <unit> at <tmp> line 1
Zoffix m: class Foo { method process(*@x) {} }.new.process: 42, |(1..100000)
camelia Too many arguments in flattening array.
in method process at <tmp> line 1
in block <unit> at <tmp> line 1
AlexDaniel` m: class Foo { method process(@x) { say @x } }.new.process: @(42,|(1..100000)) 15:15
camelia (42 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 8…
AlexDaniel` m: class Foo { method process(@x) { say @x } }.new.process: (42,|(1..100000))
camelia (42 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 8…
Zoffix m: class Foo { method process(**@x) {@x.elems.say} }.new.process: (42, |(1..100000)) 15:19
camelia 1
AlexDaniel` argh, there was a ticket asking to change the behavior of […] with one arg
can't find it 15:20
ah RT128758
ah RT#128758
synopsebot RT#128758 [open]: rt.perl.org/Ticket/Display.html?id=128758 Reduce with numeric ops does not numify things if only one arg is passed ([*] set(1,2,3))
AlexDaniel` no, that's not it 15:21
but very close
Zoffix m: say &infix:<*>(set 1, 2, 3)
camelia 3
Zoffix m: say [*] set 1, 2, 3
camelia 3
jnthn evening, #perl6 o/ 15:29
yoleaux 23 Oct 2017 19:55Z <lizmat> jnthn: I wonder whether the difference between .hyper and .race is really whether results are buffered or not
23 Oct 2017 19:55Z <lizmat> jnthn: I could see .race internally working as a Supply, emitting values from several threads as they become available
23 Oct 2017 19:57Z <lizmat> jnthn: instead of .pushing to an IterationBuffer until the end of the batch
24 Oct 2017 12:29Z <lizmat> jnthn: I think I've reduced github.com/rakudo/rakudo/issues/1202 to a pure MoarVM issue
24 Oct 2017 20:48Z <AlexDaniel`> jnthn: if it happens that you come back before we figure it out, here is a thing to look at github.com/tokuhirom/p6-WebSocket/...-339120879 (the release was delayed for other reasons anyway, so would be great to fix this thing also)
04:41Z <Zoffix> jnthn: I golfed AlexDaniel`'s issue into a test: github.com/perl6/roast/commit/74445ddf8a and committed a hack that makes the test pass: github.com/rakudo/rakudo/commit/ce7e5444a2 but WebSocket module still fails its tests because I'm guessing it got more than one AffinityWorker active so the deadlock still happens and isn't fixed by my hack. No idea why the deadlock actually occurs
04:42Z <Zoffix> jnthn: so dunno how to fix. My hack needs to be reverted
11:40Z <Zoffix> jnthn: reverted my hack and hardened the test to cover the stuff my hack didn't fix. A bit more debug info on the bug here: irclog.perlgeek.de/perl6-dev/2017-...i_15351230
jnthn o.O
Zoffix \o/ 15:30
timotimo greetings jnthn 15:31
Zoffix FWIW: <Zoffix> So the WebSocket issue hangs here: github.com/rakudo/rakudo/blob/nom/...y.pm#L1948
timotimo no need to do point releases this time, at least not yet :P
AlexDaniel` timotimo: ಠ_ಠ 15:32
timotimo ftr i'm glad we haven't hit the big red release button yet
jnthn Oh, no release?
Why?
timotimo things be broken 15:33
jnthn Such as?
Geth rakudo/affinity-debugging-stuff: ca46390fcd | (Zoffix Znet)++ | 2 files
Share affinity debugging info
AlexDaniel` jnthn: e.g. getc was not working on macos, now we're looking at the sched bug
(both things regressed) 15:34
jnthn Hm, getc? Interesting.
That one got fixed?
AlexDaniel` yeah github.com/MoarVM/MoarVM/pull/731
Zoffix FWIW ^ that branch got debug prints I added all over the place and this is the working code gist.github.com/zoffixznet/86d1eaa...68a99f64c0 and if you comment out that "start @ =" it'll hang
running with ZZ=1 ZZ4=1 perl6 teh-script 15:35
jnthn ah, ok
Zoffix Actually it won't hang it just won't print the "but not here" line
But in real code that's a hang
jnthn On affinity scheduling stuff, my guess is that something manages to create a dependency between two affinity-scheduled things. That would be a kinda odd thing to have happen, but I guess test code that ends up with client and server in the same program could hit such things. 15:36
It's also far more likely to happen under 6.c await semantics 15:37
A reasonable thing to do is probably to have the supervisor look to see if there are affinity workers making no progress, and just steal work from them into the general queue.
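In other words, something along these lines in the supervisor loop (all names here are illustrative; a real version also needs a non-blocking poll on the concurrent queue, which comes up later in this log):
    # run on each supervisor tick:
    for @affinity-workers -> $worker {
        if $worker.queue.elems and not $worker.completed-anything-recently {
            # worker looks stuck but has pending work: hand an item to the general pool
            $general-queue.push: $worker.queue.shift;
        }
    }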
Zoffix problem exists in v6.d.PREVIEW too 15:38
jnthn We can't really fix it in terms of creating more affinity workers.
Zoffix Supervisor doesn't currently care about affinity workers tho, right?
jnthn Right
Zoffix OK
jnthn Does the problem case in question have a client and server in the same program? 15:39
Zoffix Yes
jnthn Where the first connects to the second?
Zoffix Yup
jnthn OK, then the solution I suggested will probably do it
AlexDaniel` is it really required to reproduce it?
jnthn AlexDaniel`: My guess would be yes
Zoffix OK. Unless someone beats me to it, I'll give it a go in 5hr
AlexDaniel` I thought having server and client separately makes no difference
at least that's how I repro-ed it in the first place 15:40
Zoffix This is the code that repros the bug and it got server and client in same code. Didn't try having them in separate programs: github.com/perl6/roast/blob/master...t#L92-L104
Zoffix &
jnthn Hmmm 15:41
But in that code it's using a sync socket to do the testing
Also 15:42
IO::Socket::Async.listen: '127.0.0.1', 15556 + $_ for ^10;
The supplies are never tapped, so it won't actually use up any sockets at all there?
s/use/fire/
ohhh... .list :/ 15:43
Goodness, that's asking for trouble
But yeah, the fix I suggested will do it 15:44
AlexDaniel` yes, just tried it, the client can be in a separate script
jnthn Yeah, it's the .list
It blocks up an affinity worker 15:45
AlexDaniel` jnthn: also, not sure if you saw it or not, but github.com/jnthn/oo-actors/issues/6 and github.com/jnthn/p6-test-scheduler/issues/3
jnthn Yeah, test-scheduler needing attention doesn't surprise me in the slightest, given how much the real scheduler has changed. 15:46
The oo-actors one is more surprising
AlexDaniel` my understanding is that both are non-issues for the release, but I don't know if there are any bigger underlying problems with these 15:47
so let me know if there's anything important
jnthn .tell lizmat The difference between hyper/race is whether we - at the end of the pipeline - just hand results back whenever they are ready, or instead ensure we hand back results relative to their input order. All the other differences that pop up are based on what you can get away with under race, but couldn't under hyper. 15:49
yoleaux jnthn: I'll pass your message to lizmat.
jnthn .tell lizmat A Supply emitting results whenever they're available isn't a particularly good design for hyper/race, as that implies concurrency control per result, not per batch of results. 15:53
yoleaux jnthn: I'll pass your message to lizmat.
jnthn m: my $n = BagHash.new: "a" => 0, "b" => 1, "c" => 2, "c" => 2; say $n.perl 15:57
camelia (:c(2)=>2,:b(1),:a(0)).BagHash
jnthn m: my $n = BagHash.new-from-pairs: "a" => 0, "b" => 1, "c" => 2, "c" => 2; say $n.perl
camelia ("b","c"=>4).BagHash
jnthn It might be less confusing if the .new example at docs.perl6.org/type/BagHash#Creati...sh_objects didn't use pairs that look like they'll be providing weights as its first example 15:58
timotimo cool, sounds like the release could be close to go-mode :) 15:59
Zoffix Filed as D#1629 16:00
synopsebot D#1629 [open]: github.com/perl6/doc/issues/1629 [docs][Hacktoberfest][LHF] Improve BagHash.new examples
jnthn Zoffix++
[Coke] samcv: no one is building docs on windows, as the makefile is busted with nmake. 16:01
so I'll rip out the non-async variant of the highlighter.
jnthn Noticed it while commenting on github.com/rakudo/rakudo/issues/1203 16:03
Still a bit tired from travelling back from vacation, so gonna go rest some 16:05
Zoffix: I doubt I'll find energy tonight to look at the affinity scheduler thing, though if you don't manage it I can look tomorrow probably. Hints: remember that $!affinity-workers and other such Lists are immutable and so should always be read from the attribute exactly once into a lexical, and then only ever accessed through that lexical, to get a consistent view. 16:06
Zoffix OK 16:07
jnthn Also, I think some kind of "how many times did we ask since this worker last completed an item" may be needed
We don't want to steal *too* eagerly
Otherwise we lose the point of affinity scheduling 16:08
I guess "how many times did we see it not make progress" is perhaps a better way of expressing it
jnthn bbl 16:09
Zoffix buggable is ded again :| 16:35
buggable: zen gimme some 16:36
buggable Zoffix, "Zen has no business with ideas."
Zoffix Great :(
m: with $*TMPDIR.add: "foo" { .spurt: buf64.new: 1000000000, 2, 3; .s.say; .slurp(:bin).say } 17:16
camelia '/tmp/foo' is a directory, cannot do '.open' on a directory
in block <unit> at <tmp> line 1
Zoffix m: with $*TMPDIR.add: "foofdasdsadas" { .spurt: buf64.new: 1000000000, 2, 3; .s.say; .slurp(:bin).say } 17:17
camelia write_fhb requires a native array of uint8 or int8
in block <unit> at <tmp> line 1
[Coke] why is it "my role X::Temporal is Exception { }" but "my role X::Comp { ... }" 17:18
(in rakudo source) - found while trying to true up the type graph 17:19
(er, the type graph in perl6/doc) 17:20
timotimo would like to see the supervisor thread do less cpu usage 17:22
though perhaps it got faster since the last time i looked
ugexe m: sub foo(+x = 1) { x }; say foo(); 17:25
camelia ===SORRY!===
At Frame 2, Instruction 1, op 'param_sp' has invalid number (3) of operands; needs 2.
AlexDaniel` ouch 17:26
c: 2015.07.2 sub foo(+x = 1) { x }; say foo(); 17:27
committable6 AlexDaniel`, ¦2015.07.2: «04===SORRY!04=== Error while compiling /tmp/P3T_Fy6_zM␤Malformed parameter␤at /tmp/P3T_Fy6_zM:1␤------> 03sub foo(08⏏04+x = 1) { x }; say foo();␤ expecting any of:␤ formal parameter «exit code = 1»»
AlexDaniel` that's a bit better :)
timotimo we only support +@x, yeah?
ugexe yeah, it's about the error message 17:28
Zoffix m: sub foo(+x) { dd x }(42)
camelia (42,)
timotimo or maybe the scheduler could see that cpu usage has been > 1% and slow down a little bit
or maybe if all queues are empty, it could sleep until any queue got items pushed to it? 17:29
Zoffix m: sub foo(*@x = 42) { dd @x }(42)
camelia 5===SORRY!5=== Error while compiling <tmp>
Cannot put default on slurpy parameter @x
at <tmp>:1
------> 3sub foo(*@x = 427⏏5) { dd @x }(42)
expecting any of:
constraint
Zoffix m: sub foo(+@x = 42) { dd @x }(42) 17:30
camelia ===SORRY!===
At Frame 2, Instruction 1, op 'param_sp' has invalid number (3) of operands; needs 2.
AlexDaniel` here's the commit that added +@foo feature: github.com/rakudo/rakudo/commit/11...7406b6eb37
Zoffix K, I see the fix 17:31
Zoffix hackety hacks
lizmat timotimo: how do you know that the supervisor thread uses so much ? 17:34
yoleaux 15:49Z <jnthn> lizmat: The difference between hyper/race is whether we - at the end of the pipeline - just hand results back whenever they are ready, or instead ensure we hand back results relative to their input order. All the other differences that pop up are based on what you can get away with under race, but couldn't under hyper.
15:53Z <jnthn> lizmat: A Supply emitting results whenever they're available isn't a particularly good design for hyper/race, as that implies concurrency control per result, not per batch of results.
timotimo time perl6 -e 'my $p = Proc::Async.new("echo"); $p.start; sleep 30' - 2% cpu 17:35
it's not actually terrible
AlexDaniel` I was also thinking about this line: github.com/rakudo/rakudo/commit/61...ec119fR462 17:36
how well is it optimized away when the env var is not set?
Zoffix It's just + sub scheduler-debug-status($message) { 17:37
+ if $scheduler-debug-status {
timotimo shouldn't be terrible, though you pay for a sub invocation each time
Zoffix + note "[SCHEDULER] $message";
+ }
+ }
AlexDaniel` timotimo: what about the construction of a str?
ugexe the `[max] (1..1000000)` bug can be avoided by using values.Slip instead of |values, but that breaks a spectest where it wants Bool::False from [^^] () eq Bool::False but fails because it's inside a slip (so probably some signature tweak/reordering is needed) 17:38
here github.com/rakudo/rakudo/blob/nom/...ps.pm#L414 17:39
timotimo oh, right
if it were a macro that put a check for "do we want debug?" everywhere it'd skip that
it also does a sum over the last n measurements 17:40
though i imagine there'd be no time where $!general-queue and $!timer-queue would both be undefined but the supervisor is running anyway? 17:41
Zoffix there will be tonight when it's taught to prod the affinity workers too 17:42
AlexDaniel` timotimo: and yeah, I also noticed that it does too much when it shouldn't be doing anything 17:43
timotimo that's only interesting if it doesn't also pass the smoothed work time to the prodding sub 17:44
AlexDaniel` (cpu usage I mean) 17:45
timotimo i don't have good ideas here. except we could instantiate the utilization values to be 5 values and throw out the check for the size of the array :P 17:46
Zoffix Why exactly can't slurpies have default values? 17:49
Just NYI? 17:50
[Coke] (X::Comp) ah, it's just setting predeclaration issues. 17:53
Zoffix Guess there's an issue with when a slurpy arg should be considered "missing" so the default applies. OK then 17:55
Geth rakudo/nom: a92950fb4f | (Zoffix Znet)++ | src/Perl6/Actions.nqp
Fix poor error with some slurpies with defaults

Bug find: irclog.perlgeek.de/perl6-dev/2017-...i_15352740
17:58
roast: 1a0162d8f0 | (Zoffix Znet)++ | S06-signature/defaults.t
Test all slurpies throw helpful error with defaults

Bug find: irclog.perlgeek.de/perl6-dev/2017-...i_15352740 Rakudo fix: github.com/rakudo/rakudo/commit/a92950fb4f
17:59
ugexe m: sub foo(+$x [$ is rw = False]) { $x }; say foo().perl; 18:10
camelia Unhandled exception: concatenate requires a concrete string, but got null
at SETTING::src/core/Exception.pm:395 (/home/camelia/rakudo-m-inst-1/share/perl6/runtime/CORE.setting.moarvm:print_exception)
from SETTING::src/core/Exception.pm:452 (…
lizmat afk again to Thor some Ragnarok& 18:11
timotimo oooh, i'm looking forward to seeing thor ragnarok, too 18:15
Zoffix Files as R##1211 18:18
Filed as R#1211
synopsebot R#1211 [open]: github.com/rakudo/rakudo/issues/1211 LTA error with `is rw` defaults in an unpacked slurpy
ugexe m: sub foo(+$x [$a is rw = False]) { $x }; say foo().perl # ftr 18:21
camelia 5===SORRY!5=== Error while compiling <tmp>
Cannot use 'is rw' on optional parameter '$a'.
at <tmp>:1
Zoffix add on the Issue :) 18:39
[Coke] is jealous again of liz getting to see Marvel stuff so much sooner. :) 18:45
why do we have Cursor if it's just an alias for Match? 18:49
m: Cursor.^mro.say ; Match.^mro.say;
camelia ((Match) (Capture) (Cool) (Any) (Mu))
((Match) (Capture) (Cool) (Any) (Mu))
timotimo it used to be different
both still exist so that existing code doesn't break
[Coke] I ask because that test in the docs for mro fails because Capture is unique. 18:50
timotimo ah, hmm
[Coke] (in having something other than itself as the first item in the mro.)
can we kill it in 6.d? 18:51
(I have the doc issue fixed locally, anyway) 19:02
m: say Failure.^mro 19:05
camelia ((Failure) Nil (Cool) (Any) (Mu))
[Coke] m: say ~::("Failure").^mro.map: *.^name; 19:06
camelia Failure Nil Cool Any Mu
Zoffix is using Synergy to KVM from a Windows 10 desktop to a Windows 7 laptop to remote-desktop into a Win7 desktop to ssh to a Ubuntu desktop to ssh to a Debian Wheezy server to git push a repo to gitlab to be able to pull it onto a Bodhi Linux VM running inside Windows 10 host 19:11
ZofBot: the future is here!
ZofBot Zoffix, I'd rather kill a black cat than lose you
samcv please don't ZofBot
Zoffix ZofBot: I'd rather you lose me than kill any cat :( 19:13
ZofBot Zoffix, 'Do it again,' said Sidney, all grin and sleek immaculateness
AlexDaniel`
.oO( “I'd rather kill a black…” O_O “… cat” oh… )
19:22
evalable6: what is wrong with you /o\ 19:25
evalable6 (exit code 1) 04===SORRY!04===
Unrecognized regex metacharacter \ (must be q…
AlexDaniel`, Full output: gist.github.com/e89e70ef9209d0b8c1...fce45ce871
AlexDaniel` evalable6: WHAT <is wrong> with ‘you’ ~~ /o/ 19:26
evalable6
Zoffix <timotimo> that's only interesting if it doesn't also pass the smoothed work time to the prodding sub 20:08
So far I don't see the reason to pass it
and yeah even right now the supervisor starts up when there are only affinity workers 20:22
:/ my affinity worker fix worked the first time :\ 20:59
Wonder if that means I messed it up :P 21:00
[Coke] Zofbot-- 21:10
Zoffix jnthn: any ballpark for what to consider stealing the queue too early? My first take is 10 times we saw a busy worker not complete anything, which I'm guessing is about .1s
[Coke]: why? :(
timotimo perhaps time for exponential back-off? 21:12
or quadratic back-off who knows
Zoffix has no idea what that is...
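For reference, exponential back-off here would just mean doubling the "how long do we wait before stealing" threshold each time a steal happens, so a worker that is merely slow doesn't keep getting its queue raided; a tiny sketch with hypothetical names:
    my $steal-threshold = 10;            # supervisor ticks with no progress before we steal
    if $ticks-without-progress >= $steal-threshold {
        steal-one-item($worker);         # hypothetical helper
        $steal-threshold *= 2;           # exponential back-off: wait twice as long next time
    }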
[Coke] I can't get rid of him. at least it's only down to the occasional ping I see these days.
Zoffix But why do you hate it so much that you want to get rid of it? :) 21:13
jnthn Zoffix: 0.1 is a bit high for the first time we do it perhaps
Zoffix: But I guess get it to work at all and then we can tune it 21:14
Zoffix OK
ZOFVM: Files=1283, Tests=152773, 153 wallclock secs (21.03 usr 3.34 sys + 3302.43 cusr 167.90 csys = 3494.70 CPU) 21:18
jnthn Does it always fix the problem, or only sometimes? :) 21:19
Zoffix looks like always 21:21
Geth rakudo/nom: 43b7cfde31 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Fix deadlock with affinity workers

Fixes github.com/tokuhirom/p6-WebSocket/...-339120879 and RT#132343: rt.perl.org/Ticket/Display.html?id=132343
Make supervisor keep an eye on affinity workers and if we spot any that are working and haven't completed anything for a while, steal their queue into general queue. Per:
  irclog.perlgeek.de/perl6-dev/2017-...i_15352262
21:22
synopsebot RT#132343 [open]: rt.perl.org/Ticket/Display.html?id=132343 [REGRESSION] better-sched and other async improvement ecosystem fallout
roast: e997143e09 | (Zoffix Znet)++ | MISC/bug-coverage-stress.t
Unfudge now-passing supply-in-a-sock deadlock test

Rakudo fix: github.com/rakudo/rakudo/commit/43b7cfde31
21:23
Zoffix jnthn: ^ that's the fix. Hope it doesn't do something insane:)
jnthn Zoffix: Yes, but please write concurrency code using normal if/loop etc rather than nqp:: ops 21:40
Oh, also I was going to steal one item at a time from the affinity queue
Also
$!state-lock.protect: {
Zoffix OK. I just saw nqp used around that area.
jnthn oh god 21:41
Zoffix uh-oh :P
jnthn Did I really do that
AlexDaniel` is dropping to bed. Have a nice * everyone
Zoffix \o
AlexDaniel` Zoffix: ♥ your work 21:42
I* :)
jnthn No, I didn't 21:44
I...really, really do not wish to maintain code written using nqp::if, nqp::while, etc.
Zoffix jnthn: yeah, I'm changing it now and I'll make it take just 1 item 21:45
jnthn I don't mind it happening in other bits of CORE.setting, but concurrency stuff...it's hard enough as it is.
Zoffix noted :)
jnthn I don't think that lock is needed, btw
Unfortunately, there's a data race too 21:46
Well
Yeah, it could actually hang the supervisor if we're super unlucky (which means, it will happen some day...)
+ nqp::while( + nqp::elems($worker-queue), + nqp::push($!general-queue, nqp::shift($worker-queue))) 21:47
oops, that pasted horribly
Anyways, nqp::shift there takes from the work queue
But at this point, we're in a race with the affinity worker itself
So this can happen:
1. Supervisor calls nqp::elems, it's 1
2. Affinity worker gets done with its last work item, does a shift on the queue 21:48
3. Supervisor calls shift on an empty queue, blocks
A better way would be to use nqp::pollqueue($worker-queue) 21:49
Zoffix There's a comment at has Lock $!state-lock = Lock.new; that says "# All of the worker and queue state below is guarded by this lock." so I figured I had to lock if I touched $!general-queue
jnthn No, that's guarding the lists of workers
The queues themselves are actually concurrent queues
Zoffix Ah, ok
jnthn And so they are safe to use from many threads
Anyway, the poll thingy will return nqp::null if the queue is empty 21:50
Zoffix grep -FR 'pollqueue' . gives me nothing. Is that the right thing? 21:50
jnthn heh, no...I forgot what the op is really called :)
Zoffix ah queuepoll
jnthn oh, right :)
So instead of nqp::elems, just poll, if the thing that comes out isn't null then nqp::push it to the general queue 21:51
Zoffix OK
jnthn If it is null, there was nothing in the queue, so do nothing
nqp::queuepoll never blocks
So it can't ever hang the supervisor
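So the steal boils down to something like this (a sketch of jnthn's suggestion; the committed version also tracks how long the worker went without completing anything):
    # nqp::queuepoll never blocks: it returns the next item, or a null if the queue is empty
    my $stolen := nqp::queuepoll($worker-queue);
    nqp::push($!general-queue, $stolen) unless nqp::isnull($stolen);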
Zoffix \o/
jnthn Those things aside, this looks sensible
Zoffix++ 21:52
Zoffix \o/ 21:53
jnthn Sleep time, 'night o/ 22:03
Zoffix \o 22:04
:/ t/spec/S17-channel/stress.t "hung" 22:15
hm, it's randomness-based 22:16
m: loop { my @p = < p e r l >.pick: *; if [!after] @p { dd @p; last } }; say now - INIT now 22:21
camelia Array @p = ["e", "l", "p", "r"]
0.0138634
Zoffix m: loop { my @p = < p e r l >.pick: *; if [!after] @p { dd @p; last } }; say now - INIT now
camelia Array @p = ["e", "l", "p", "r"]
0.0073785
Zoffix :S wonder why test is so slow. 22:22
4m58.317s and I had to kill it
gfldex even on a fast machine pick-sort can take a while :) 22:24
Zoffix but above it doesn't 22:25
I see now why AlexDaniel` pointed out perf of scheduler-debug-status... It gets called like a billion times 22:26
*gazillion (I didn't really count or anything) 22:27
ZOFVM: Files=1283, Tests=152773, 156 wallclock secs (21.43 usr 3.84 sys + 3369.68 cusr 182.74 csys = 3577.69 CPU) 22:29
Ahhhh 22:30
When the box is busy with the rest of the stresstest, the heuristic for deadlock adds more workers and the test passes. If the test ends up closer to the end or whatever, the rest of the stresstest doesn't generate enough CPU use for the deadlock heuristic to get triggered, so it never happens. I can't get it to complete when running with just RAKUDO_SCHEDULER_DEBUG=1 ./perl6 t/spec/S17-channel/stress.t on a 24-core box 22:33
and on my home 4-core box I can't get it to complete 'cause it adds more workers but it also noms a ton of RAM. I run out of RAM before it gets a chance to complete
So it means there's another scheduler LTAness 22:34
Geth rakudo/nom: 59bfa5ab37 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Polish off affinity worker prodder

Per irclog.perlgeek.de/perl6-dev/2017-...i_15353842
  - Improve readability
  - Steal only one item from the queue
  - Prevent potential supervisor deadlock from elems'ing the queue
   and having a race empty it and for nqp::shift() to block
22:35
Zoffix m: my @last-utils = ^5; for ^3 { @last-utils = @last-utils.rotate; @last-utils[0] = Int(rand*10); }; say @last-utils 23:07
camelia [9 4 0 8 3]
Zoffix m: my int @last-utils = ^5; for ^3 { @last-utils = @last-utils.rotate; @last-utils[0] = Int(rand*10); }; say @last-utils
camelia 4 4 0 0 1 2 3 4 6 6 2 3 4 0 0 1 2 3 4 4 4 3 4 0 0 1 2 3 4 6 6 2 3 4 0 0 1 2 3 4
Zoffix fun :")
Man the scheduler debug status thing is expensive 23:26
Gonna comment it out and after release add a #?debug build preprocessor directive and stick it under there 23:27
m, I guess not; doesn't even show up in profile, so screw it. 23:45
timotimo don't forget the profiler doesn't understand multithreaded programs yet
Zoffix ZOFFLOP: t/spec/S15-nfg/many-threads.t # segfaulted 23:46
timotimo so it's very unlikely that it'd show up even if it was a significant time cost
Zoffix timotimo: ohhh. right.
timotimo is still waiting impatiently for wrists to recover >_<
Zoffix In time measurement for the sub, it turns up at .233s for 100_000 iterations
m: say (0.233032/1000)
camelia 0.000233032
Zoffix So it costs us .2ms for every rakudo program 23:47
per second
Geth rakudo/nom: 27590e8bc7 | (Zoffix Znet)++ | src/core/ThreadPoolScheduler.pm
Make supervisor per-core-util calculator 2.8x faster

Most of this is from rewriting &push to .push so it doesn't go through slippy candidate
23:48
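As the commit message notes, the sub form `push @a, $x` goes through a slippy candidate while `@a.push($x)` hits the single-value method candidate directly; a crude way to see the difference (an illustrative micro-benchmark, not the commit's own measurement):
    my @a;
    my $t0 = now; push @a, $_  for ^100_000; note "sub form:    {now - $t0}";
    @a = ();
    my $t1 = now; @a.push($_)  for ^100_000; note "method form: {now - $t1}";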
Zoffix If I add the scheduler-debug-status() into the bench, that turns up as 2.3x faster and if I remove the scheduler-debug-status in the "new" version, it ends up 3.56x faster 23:49
Meh, gonna leave it in. With more users using the release with the new scheduler it might come in handy to ask users to dump the status 23:50
Zoffix looks into ./perl6 t/spec/S17-channel/stress.t issue...
timotimo: your wrists still sore? :o been a long time. 23:51
If I get RSI, I just stay 100% off the computer for a weekend and it goes away