samcv jnthn: any chance you can look at my rakudo PR github.com/rakudo/rakudo/pull/1617 you were the one who last edited those files 00:03
jnthn samcv: reviewed 00:15
Geth rakudo: 6458811a89 | (Samantha McVey)++ | 4 files
Add strict, replacement options for IO::Handle, Str.encode & other fixes

  * Combine Encoding::Encoder::Builtin::Replacement and Encoding::Encoder::Builtin
   into one class. They were essentially duplicated, and we can achieve the same
   goal without adding any additional overhead.
  * Encoding.decoder: make sure :!replacement does not attempt to use a
... (12 more lines)
01:30
rakudo: c9c5f34130 | (Samantha McVey)++ | src/core/Rakudo/Internals.pm6
Fix bug mapping windows1252/windows1251 without a dash

This bug did not result in any regression, since we did not accept windows-1252 or windows-1251 without the dash until very recently. By accident windows1252 was mapping to windows-1251 and windows1251 was mapping to windows-1252.
rakudo: 3420b67b67 | (Samantha McVey)++ (committed using GitHub Web editor) | 5 files
Merge pull request #1617 from samcv/encoder

Add strict, replacement options for IO::Handle, Str.encode & other fixes
nqp: 68164ca223 | (Stefan Seifert)++ | t/concurrency/03-semaphore.t
Hopefully fix flapping semaphore test

There was a race condition on the $released variable. $t6 released the semaphore and only then set the $released variable to 1. It's possible that
  $t5 acquired the semaphore and checked the $released variable between this
  release and set. So the test would fail because $released was still 0. First set the variable and then release the semaphore to fix the race.
07:45
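A minimal Raku-level sketch of the ordering principle in that fix (the actual test is NQP; the thread and variable names here just mirror the commit message):

    my $sem      = Semaphore.new(0);
    my $released = 0;
    my $t6 = Thread.start({
        $released = 1;   # publish the state first ...
        $sem.release;    # ... then release, so any acquirer already sees $released == 1
    });
    my $t5 = Thread.start({
        $sem.acquire;
        say "flag seen after acquire: $released";   # reliably 1 with this ordering
    });
    .finish for $t6, $t5;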
nine .ask jnthn do you concur with the reasoning? github.com/perl6/nqp/commit/68164ca223 07:46
yoleaux nine: I'll pass your message to jnthn.
travis-ci NQP build passed. Stefan Seifert 'Hopefully fix flapping semaphore test 08:07
travis-ci.org/perl6/nqp/builds/355731684 github.com/perl6/nqp/compare/a2f66...164ca22305
lizmat Files=1236, Tests=76153, 316 wallclock secs (14.95 usr 5.35 sys + 2172.34 cusr 216.08 csys = 2408.72 CPU) 08:30
Geth nqp: 200e4cf4ed | (Stefan Seifert)++ | t/concurrency/01-thread.t
Fix concurrency failure in threadyield sibling test

Pushing onto an array is not an atomic operation. The array's storage may have to be expanded first before the new element can be added. Thus a simple check for nqp::elems(@a) is not enough to ensure that the push is already done. We also have to check for the actual end result before we may continue. Otherwise two push operations may run concurrently and one of the updates may be lost.
09:40
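A hedged Raku-level sketch of the waiting pattern described in that commit (the real test is NQP and uses nqp:: ops); the point is to wait for the concrete end result rather than just the element count:

    my @a;
    my $t = Thread.start({ @a.push: 'child' });
    # too weak: only waits for the count to change
    # while @a.elems < 1 { Thread.yield }
    # stronger: wait until the expected value is actually visible
    while !(@a.elems && @a[0] eq 'child') { Thread.yield }
    $t.finish;
    say @a;   # [child]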
rakudo: 34b294d2e1 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Internals/HyperToIterator.pm6
Give HyperToIterator its own push-all

Should help a bit in the my @a = foo.hyper.bar case.
09:44
rakudo: ae0cbc3020 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Internals/HyperToIterator.pm6
Fix issue with HyperToIterator.skip-at-least

The $!current-items may already have items that we need to skip. So get any new items *after* we have skipped any from the current items.
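A self-contained sketch of that ordering point (illustrative class, not the actual HyperToIterator code): skip from the items that are already buffered first, and only then consume new ones.

    class BufferedSkipper {
        has @.current-items;   # items already received from upstream
        has @.upstream;        # stand-in for batches still to come
        method skip-at-least(Int $n is copy) {
            while $n && @!current-items { @!current-items.shift; --$n }   # skip buffered items first
            while $n && @!upstream      { @!upstream.shift;      --$n }   # then any new ones
            self
        }
        method remaining() { flat @!current-items, @!upstream }
    }
    say BufferedSkipper.new(current-items => [1, 2], upstream => [3, 4, 5]).skip-at-least(3).remaining;
    # (4 5)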
nqp: 66be87d6df | (Stefan Seifert)++ | t/concurrency/01-thread.t
Fix potential "test run out of order" in 01-thread.t

Without synchronization, two ok() calls may run concurrently and confuse the test count, with one ok() coming in between the other's update of the global test counter and the actual printing of its test result.
09:48
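A sketch of the idea behind that fix, in Raku rather than NQP: serialize the ok() calls from different threads so the shared test counter and the printed TAP line cannot interleave.

    use Test;
    plan 2;
    my $lock = Lock.new;
    my @threads = (1..2).map: -> $i {
        Thread.start({ $lock.protect({ ok True, "thread $i got here" }) })
    };
    .finish for @threads;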
nine Ah concurrency. Always fun. 09:49
|Tux| Rakudo version 2018.03-17-gae0cbc302 - MoarVM version 2018.03-3-g2bd691551
csv-ip5xs         0.820 -  0.821
csv-ip5xs-20      7.770 -  7.807
csv-parser       36.465 - 37.118
csv-test-xs-20    0.449 -  0.463
test              8.368 -  9.004
test-t            2.568 -  2.580
test-t --race     1.054 -  1.058
test-t-20        46.132 - 46.536
test-t-20 --race 15.992 - 16.790
11:24
Geth nqp: 8f10cb746b | (Stefan Seifert)++ | t/concurrency/01-thread.t
Fix brown paper bag bug in commit 66be87d6df893a6849d07312f8f125610ebfd7f2

Of course we are only done after we are actually done printing the test result...
11:44
rakudo: ec5416a9a3 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Internals/HyperToIterator.pm6
Replace HyperToIterator.push-all by push-exactly

The difference between the two is just one int condition and one int assignment per HyperWorkBatch, which is neatly shown in the diff of this commit. The extra work, at least currently, completely gets lost in the noise.
Also fix superfluous check and return value in skip-at-least: we only exit the loop through -return- anyway.
11:50
specs: 690bf9b60e | (Zoffix Znet)++ (committed using GitHub Web editor) | html/perl-with-historical-message.css
Show line number anchors on hover

for easy linkage
12:57
lizmat I have a hard time understanding whether this is correct or not: 13:06
m: dd [[2, 3], [4, [5, 6]]]».produce(&[+])
camelia [[(2,).Seq, (3,).Seq], [(4,).Seq, [(5,).Seq, (6,).Seq]]]
Zoffix lizmat: yeah, looks right. The hyper descends into iterables, so you're calling .produce(&[+]) on each of those 5 numbers individually 13:15
oh 13:16
jnthn Hm, but maybe produce should be nodal?
Zoffix well, except for the last one. Doesn't descending stop with content
conts I meant
m: dd [5, 6].produce(&[+])
camelia (5, 11).Seq
Zoffix m: dd [4, [5, 6]].produce(&[+]) 13:17
camelia (4, 6).Seq
Zoffix m: say [[2, 3], [4, [5, [1, 2, 3], 6]]]».&{.elems}' 13:18
camelia ===SORRY!=== Error while compiling <tmp>
Strange text after block (missing semicolon or comma?)
at <tmp>:1
------> , 3], [4, [5, [1, 2, 3], 6]]]».&{.elems}⏏'
expecting any of:
infix
infix stopper
…
Zoffix m: say [[2, 3], [4, [5, [1, 2, 3], 6]]]».&{.elems}
camelia [[1 1] [1 [1 [1 1 1] 1]]]
Zoffix guess not :S
m: dd [[2, 3], [4, [5, 6]]].map: *.produce(&[+]) 13:20
camelia ((2, 5).Seq, (4, 6).Seq).Seq
Zoffix m: List.^lookup("produce").^mixin: role { method nodal {} }; dd [[2, 3], [4, [5, 6]]]Ā».produce(&[+])
camelia ((2, 5).Seq, (4, 6).Seq)
Zoffix e: List.^methods(:all).grep(!*.can: "nodal").sort(*.name)».say 13:24
evalable6 ACCEPTS
ACCEPTS
ACCEPTS
ASSIGN-POS
BIND-POS
BIND-POS
BUILDALL
BUILDALL
BUILD_LEAST_DERIVED…
Zoffix, Full output: gist.github.com/c28a66d953666e2e6c...1d5e04d2f2
lizmat jnthn Zoffix: well, that's the thing: produce currently has a proto *not* marked "is nodal", and a candidate that *is* 13:27
I was streamlining it so that the proto would have the nodal (like we do in most cases)
and then I had 2 test started failing 13:28
*tests
Zoffix Yeah, +1 on reduce/produce being nodal; they seem to be the only ones in that list that are kinda like outliers (and maybe .head/.tail too, since .AT-POS is?)
m: say List.^lookup("produce").candidates.map: ?*.can: "nodal" 13:29
camelia (True)
Zoffix m: say List.^lookup("produce").can: "nodal"
camelia ()
Zoffix It's the whole mixing in a trait goes into that particular routine and not proto thing again. 13:30
There are two branches with trial work, RT132710-traits-warn-on-non-proto and RT132710-traits-from-all-multies; IIRC neither works well 13:31
lizmat jnthn: so you agree the result should be: ((2, 5).Seq, (4, 6).Seq) ?
synopsebot RT#132710 [open]: rt.perl.org/Ticket/Display.html?id=132710 [LTA] Warning message for duplicated tighter trait
lizmat fwiw, that's the result I get if I make it an "only" method with an "is nodal" 13:32
Zoffix "Both ideas are crap and don't quite work out." yeah, looks like there are problems with both
The test descriptions in ./S03-metaops/hyper.t seem to be reversed. They say "blah blah is nodal" but test that they're not nodal. 13:33
(or is it our impl that's inverted?) 13:34
lizmat not sure, the tests were written by TimToady in Oct 2015 13:35
commit 7ea9ae0fb1752729b 13:36
Zoffix Oh I can't read. I was looking at original data, not the tested result...
lizmat: but yeah, for produce and reduce, the test is testing that they're nodal, but the result it's testing with shows them NOT being nodal. Looks like a copy-pasta error from copying the produced result from the code. 13:38
lizmat yeah, that's my feeling as well
Zoffix lizmat: oh and .flat too 13:42
jnthn lizmat: Not sure, don't have the spare time to think about it properly at the moment
lizmat yeah, it's confusing
I'll just commit and see what TimToady thinks of it :-)
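For reference, a hedged sketch of what putting the nodal marker on the proto looks like; the class is illustrative, not the actual Rakudo source, and it assumes the `is nodal` trait is applicable outside the core setting:

    class Demo {
        proto method produce(|) is nodal {*}        # trait on the proto, as in most core cases
        multi method produce(&with) { "produced" }
    }
    say ?Demo.^lookup('produce').can('nodal');      # True -- the same check Zoffix used above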
Geth nqp: 32fc3e8f17 | (Stefan Seifert)++ | t/concurrency/01-thread.t
Port fix for threadyield sibling test to parent-child test

The threadyield parent-child test is structured similarly to the faulty sibling test, thus the same fix applies.
13:43
nine Kinda odd that the sibling test failed so often while the same issue in the parent-child test was really hard to reproduce.
Geth rakudo: 08eb465f9f | (Elizabeth Mattijsen)++ | src/core/Any-iterable-methods.pm6
Make .produce/.reduce nodal the correct way

The intent seems to be that they should be nodal, but the way this was implemented hid the "is nodal" functionality.
Simplify the actually working candidate to not need a temporary variable or an -if- statement: if the lookup for the reducer fails, a Failure will ... (11 more lines)
13:45
lizmat GH #1633 13:53
GH R#1633
synopsebot R#1633 [open]: github.com/rakudo/rakudo/issues/1633 Nodality of .produce / .reduce
lizmat m: dd ^10 .race.hyper # why isn't this a RaceSeq? If the order of produced values is already indeterminate, why change it to a HyperSeq? 14:21
camelia HyperSeq.new(configuration => HyperConfiguration.new(batch => 64, degree => 4))
jnthn Because .race and .hyper set the processing mode for the operations that *follow* them 14:24
They don't themselves do anything 14:25
lizmat yeah, I got that
but what is the point of using hyper mode when the data given is already in an indeterminate order ?
jnthn Because the next thing will be a sort. Or will produce Slips. Or will map and then flatten. 14:26
lizmat but wouldn't those end hyper processing anyway, aka produce a serial Seq ? 14:27
jnthn Well, I'd expect RaceSeq sort to just call .hyper.sort
lizmat I seem to be missing something in this discussion :-)
jnthn The other two cases - how could we know? 14:28
lizmat .hyper.sort would produce a Seq, right ?
jnthn No
I mean, it does *now* 14:29
That's because we didn't yet implement all the various operations on HyperSeq/RaceSeq
lizmat I'm not sure how it could ever *not* produce a Seq
.hyper.sort that is
I seem to be missing something in this discussion :-)
fwiw, I'm looking at implementing various ops on HyperSeq/RaceSeq :-) 14:30
jnthn It should produce a HyperSeq, so operations beyond that point are parallelized
Note that it may also sort batches in parallel and then merge them
lizmat ok, so the .hyper is contagious in that sense 14:31
jnthn Yes
lizmat even though the result of .hyper.sort is essentially a Seq
I mean, the .sort would have to finish before it can start batching again
jnthn Well, in the "we have to see all the values before we can produce the first batch of result values" sense
lizmat right 14:32
jnthn Not really, if you're doing a merge sort then the final merge can produce batches as soon as it's got that many elements merged
And downstream processing of those can get on with the job
lizmat ah, ok, *that*'s what I was missing :-) 14:33
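A small illustration of the point jnthn is making: .hyper/.race just set the processing mode, and it is the operations chained after them that run in parallel (the batch/degree values here are arbitrary).

    # hyper: parallel processing, results delivered in input order
    say (1..20).hyper(batch => 4, degree => 2).map(* * 10).head(5);   # (10 20 30 40 50)
    # race: parallel processing, result order not guaranteed
    say (1..20).race(batch => 4, degree => 2).map(* * 10).elems;      # 20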
Geth rakudo: 7af3b648da | (Elizabeth Mattijsen)++ | src/core/Rakudo/Internals/HyperWorkStage.pm6
Make a HyperBatcher responsible for its own sequencing

Some HyperBatchers may not need a sequence number as part of the Batcher. Having it in a role, and currently not being able to use writable native int attributes from a role, made the sequence number fetching appear quite high in profiles. This was fixed a while ago in the default HyperIteratorBatcher by giving it its own native int sequence number.
14:48
lizmat jnthn: re Rakudo::Internals::HyperIteratorBatcher, I don't really see why that shouldn't be a class, rather than a role 14:52
it doesn't define any additional interface either 14:54
jnthn Hmm, it does HyperBatcher? 14:55
m: say Rakudo::Internals::HyperIteratorBatcher.^roles
camelia ((HyperBatcher) (HyperWorkStage))
jnthn Hah, yes
I'm not sure why it's a role either. HyperBatcher is the abstraction
lizmat that's what I thought :-) 14:56
jnthn (And exists so we can implement alternate, optimized, batchers for arrays/ranges etc.)
lizmat that's what I was looking at :-)
jnthn Could just be copy-pasta :)
lizmat yeah, looks like
Geth rakudo: 8e1366e7a9 | (Elizabeth Mattijsen)++ | 2 files
Fix apparent copy-pasta

The HyperIteratorBatcher is a consumer of the HyperBatcher role, but does not itself need to be a role, or so it seems.
15:21
lizmat afk for a few hours&
[Coke] Build Failure on ports-10.6_i386_legacy: MoarVM, libatomic_ops, nqp, rakudo 15:33
not sure if we have any os x 10.6 users in the house. (I'm on at least 10.12 everywhere) 15:34
... and 10.5
Geth specs: cde208bde0 | (Zoffix Znet)++ (committed using GitHub Web editor) | html/perl-with-historical-message.css
Tweak line number shower
16:25
rakudo: 6ca3920edc | (Aleks-Daniel Jakimenko-Aleksejev)++ | t/packages/Test/Helpers.pm6
Be more patient in doesn't-hang (by default)

1.5s may be too low on some systems, especially under load. This number will only affect things that hang, so we can set it very high
  (because normally tests should pass).
  See #1257.
17:39
synopsebot RAKUDO#1257 [open]: github.com/rakudo/rakudo/issues/1257 [regression][severe] Rakudo 2017.10 fails to build on Debian big endian systems
rakudo: df85388d3f | (Zoffix Znet)++ (committed using GitHub Web editor) | t/packages/Test/Helpers.pm6
Sync `doesn't-hang` helper with latest roast version

  - Add diag info about re-scaling timing on failure
  - Absolutify executable path (IIRC that fixes some errors)
  - Rescale with larger multiplier for JVM
17:51
nqp: eba3922cbd | usev6++ | src/vm/jvm/runtime/org/perl6/nqp/io/FileHandle.java
[JVM] Remove some special cases for .open modes

Some combinations of arguments to .open that didn't work too well with Java's StandardOpenOptions have been removed from roast with
  github.com/perl6/roast/commit/37aa7c3d7f. So some recently
added special cases for resolveOpenMode are no longer necessary.
19:57
rakudo: 9532e9c327 | (Elizabeth Mattijsen)++ | 2 files
Move parameter checking to HyperConfiguration.new

Whether :batch and :degree are valid is really best checked inside the HyperConfiguration class itself. Initially I moved the Iterable!valid-hyper-race to HyperConfiguration, but then it felt contrived. So I bit the bullet and moved the check to a TWEAK submethod.
This should make it easier to create .hyper / .race methods in other classes, such as Range, Array, List, etc.
21:10
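A hedged sketch of the TWEAK approach described in that commit; the class name and error messages are illustrative, not Rakudo's actual HyperConfiguration code.

    class DemoHyperConfiguration {
        has Int $.batch;
        has Int $.degree;
        submethod TWEAK(--> Nil) {
            die "batch must be a positive Int, got $!batch"   unless $!batch  > 0;
            die "degree must be a positive Int, got $!degree" unless $!degree > 0;
        }
    }
    say DemoHyperConfiguration.new(batch => 64, degree => 4);   # constructs fine
    DemoHyperConfiguration.new(batch => 0, degree => 4);        # dies in TWEAK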
dogbert11 t/spec/S03-metaops/hyper.t fails two tests atm 21:54
lizmat dogbert11: yes, see GH R#1633 22:20
synopsebot R#1633 [open]: github.com/rakudo/rakudo/issues/1633 Nodality of .produce / .reduce
Geth rakudo/js: 934 commits pushed by (Tom Browder)++, (Elizabeth Mattijsen)++, (Jonathan Worthington)++, (Daniel Green)++, (Aleks-Daniel Jakimenko-Aleksejev)++, (Samantha McVey)++, usev6++, (Nick Logan)++, (Zoffix Znet)++, (Alex Chen)++, lizmat++, (Moritz Lenz)++, (Timo Paulssen)++, (Martin Barth)++, (Fernando Correa)++, (Fernando Correa de Oliveira)++, TimToady++, (Tim Smith)++, (Ben Davies)++, (Jeremy Studer)++, MasterDuke17++, (Itsuki Toyota
review: github.com/rakudo/rakudo/compare/9...0befd5e64b
23:30