[10:14] ¦ rakudo: 92c490ee4e | (Elizabeth Mattijsen)++ | src/core.c/IterationBuffer.pm6
[10:14] ¦ rakudo: Micro-optimize IterationBuffer.append(IterationBuffer:D)
[10:14] ¦ rakudo: review: https://github.com/rakudo/rakudo/commit/92c490ee4e
[10:21] ¦ rakudo/IterationBuffer.unshift: 71409d0b26 | (Elizabeth Mattijsen)++ | src/core.c/IterationBuffer.pm6
[10:21] ¦ rakudo/IterationBuffer.unshift: Add IterationBuffer.unshift/prepend methods
[10:21] ¦ rakudo/IterationBuffer.unshift:
[10:21] ¦ rakudo/IterationBuffer.unshift: Seems like a logical addition without interfering with the performance
[10:21] ¦ rakudo/IterationBuffer.unshift: of the other IterationBuffer logic. And I have a use-case for it :-)
[10:21] ¦ rakudo/IterationBuffer.unshift: review: https://github.com/rakudo/rakudo/commit/71409d0b26
[10:21] ¦ rakudo: lizmat++ created pull request #4641: Add IterationBuffer.unshift/prepend methods
[10:21] ¦ rakudo: review: https://github.com/rakudo/rakudo/pull/4641
[11:00] Next release in ≈3 days and ≈7 hours. 3 blockers. Please log your changes in the ChangeLog: https://github.com/rakudo/rakudo/wiki/ChangeLog-Draft
[11:02] ^^^ and we're still looking for a release manager...
[11:02] if you're hanging out on this channel, pretty sure you have the (at least potential) chops to be one!
[11:04] lizmat: I expect the primary bottleneck is time and energy, not capability
[11:04] speaking for myself, at least
[11:04] that goes for a lot of us :-)
[12:34] <[Tux]> updating my OS, so probably no timing today
[12:35] [Tux]: hope that will be painless
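For reference, a minimal sketch of how the IterationBuffer.unshift/prepend methods proposed in PR #4641 above might be used, assuming they mirror the existing push/append pair (unshift taking a single value, prepend taking another IterationBuffer); the exact signatures are whatever the PR ends up with:

    # Sketch only: unshift/prepend are the methods proposed in PR #4641
    # and are assumed to mirror the existing push/append pair.
    my $buf = IterationBuffer.new;
    $buf.push($_) for 2, 3;      # add at the end (existing API)
    $buf.unshift(1);             # add a single value at the front (proposed)
    my $head = IterationBuffer.new;
    $head.push(0);
    $buf.prepend($head);         # splice another buffer in front (proposed)
    say $buf.List;               # (0 1 2 3)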
[12:58] m: dd ((1,2,3),(4,5,6)).map: *.Seq # is there a reason we don't transparently turn this into (1,2,3,4,5,6).Seq ?
[12:58] rakudo-moar 92c490ee4: OUTPUT: «((1, 2, 3).Seq, (4, 5, 6).Seq).Seq␤»
[13:01] m: dd ((1,2,3),(4,5,6)).map: *.Slip # I know this will do it, but sadly that still produces *all* values of the list given
[13:01] rakudo-moar 92c490ee4: OUTPUT: «(1, 2, 3, 4, 5, 6).Seq␤»
[13:01] m: dd (((1,2,3),(4,5,6)).map: { .say; .Slip }).head(4) # even if only partially needed
[13:01] rakudo-moar 92c490ee4: OUTPUT: «(1 2 3)␤(1, 2, 3, 4).Seq␤(4 5 6)␤»
[13:25] It looks like supporting a *.Seq that lazily slips into another Seq would just be a matter of adding "|| nqp::istype($value,Seq)" in a few specific places
[13:26] the problem with Slipping a Seq is that it caches values (because Slip isa List)
[13:27] or maybe we just need to make Seq.Slip smarter / lazier
[14:14] ¦ nqp: e2146395fb | niner++ (committed using GitHub Web editor) | 5 files
[14:14] ¦ nqp: New disp nativecall (#742)
[14:14] ¦ nqp:
[14:14] ¦ nqp: This can be used to conditionally compile backend specific code in
[14:14] ¦ nqp: modules like NativeCall
[14:14] ¦ nqp: review: https://github.com/Raku/nqp/commit/e2146395fb
[14:19] ¦ nqp: 7309a35683 | (Elizabeth Mattijsen)++ | tools/templates/MOAR_REVISION
[14:19] ¦ nqp: Bump MoarVM to get the nativecall based on new-disp
[14:19] ¦ nqp:
[14:19] ¦ nqp: nine++ for a lot of work on that one!!
[14:19] ¦ nqp: review: https://github.com/Raku/nqp/commit/7309a35683
[14:21] ¦ rakudo: ae8fec579a | niner++ (committed using GitHub Web editor) | 7 files
[14:21] ¦ rakudo: New disp nativecall (#4629)
[14:21] ¦ rakudo:
[14:21] ¦ rakudo: Lets callable objects have a say in which dispatcher is used when
[14:21] ¦ rakudo: they are invoked.
[14:21] ¦ rakudo:
[14:21] ¦ rakudo: API for asking whether the compiler supports a certain nqp op. This can be used to conditionally compile backend specific code in modules like NativeCall
[14:21] ¦ rakudo:
[14:22] ¦ rakudo: <…commit message has 25 more lines…>
[14:22] ¦ rakudo: review: https://github.com/rakudo/rakudo/commit/ae8fec579a
[14:40] ¦ rakudo: 505ffbfa1a | (Elizabeth Mattijsen)++ | tools/templates/NQP_REVISION
[14:40] ¦ rakudo: Bump NQP to get all of the NativeCall support
[14:40] ¦ rakudo: review: https://github.com/rakudo/rakudo/commit/505ffbfa1a
[14:41] Looking forward to [Tux]'s Inline::Perl5 test-t results
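Regarding the Slip discussion earlier: a small sketch of the behaviour being described, using gather/take as a user-level way to flatten one value at a time rather than one sub-list at a time. This is only an illustration of the semantics under discussion, not the core change being considered:

    # .Slip flattens, but each sub-list is slipped as a whole, as the
    # .head(4) example above showed. A gather/take pipeline hands out the
    # flattened values one by one, so a partial consumer stops early:
    my @seen;
    my $flat := gather for (1,2,3), (4,5,6) -> @sub {
        for @sub { @seen.push($_); take $_ }
    }
    say $flat.head(4);   # (1 2 3 4)
    say @seen;           # [1 2 3 4] -- the tail of the second sub-list was never pulled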
[19:39] I'm currently looking into https://github.com/Raku/nqp/issues/250. I've already noticed a couple of oddities with the output of nqp::encode on the JVM backend. When trying things out I'm comparing with the output on MoarVM, but doing that I've seen some things I don't understand either. Maybe someone could enlighten me there?
[19:40] m: use nqp; dd nqp::encode("ab", "utf16", buf8.new) ## this looks logical -- two elems in an 8-bit buffer for one utf16 code point
[19:40] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint8].new(97,0,98,0)␤»
[19:40] m: use nqp; dd nqp::encode("ab", "utf16", buf16.new) ## looks logical as well
[19:40] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint16].new(97,98)␤»
[19:41] m: use nqp; dd nqp::encode("ab", "utf16", buf32.new) ## but why is it somehow added up to one big integer here?
[19:41] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint32].new(6422625)␤»
[19:41] m: say 97 + 2**16 * 98
[19:41] rakudo-moar 505ffbfa1: OUTPUT: «6422625␤»
[19:42] m: use nqp; dd nqp::encode("abc", "utf16", buf32.new) ## same result for two and three characters?
[19:42] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint32].new(6422625)␤»
[19:43] Or does that call to nqp::encode with a buf32.new make no sense? (But why?)
[19:47] <[Tux]> Rakudo v2021.10-100-g505ffbfa1 (v6.d) on MoarVM 2021.10-71-gc779c6320
[19:47] <[Tux]> csv-test-xs-20    0.359 - 0.422
[19:47] <[Tux]> test-t --race     0.973 - 0.985
[19:47] <[Tux]> csv-ip5xs         1.144 - 1.170
[19:47] <[Tux]> test-t            1.687 - 1.707
[19:47] <[Tux]> csv-parser        4.171 - 4.501
[19:47] <[Tux]> csv-ip5xs-20      6.242 - 6.255
[19:47] <[Tux]> test              7.032 - 7.206
[19:47] <[Tux]> test-t-20 --race  7.548 - 8.033
[19:47] <[Tux]> test-t-20        24.768 - 24.948
[19:49] [Tux]: feels like the machine is slower after the update?
[19:50] <[Tux]> or it's still busy with little things. I just wanted to prove it all still works
[19:51] but wow, big drop in csv-ip5xs-20 -- looks like it was 15 yesterday
[19:57] https://logs.liz.nl/search.html?query=csv-ip5xs&type=starts-with&nicks=&channel=raku-dev
[20:12] nine++
[20:14] returning to my earlier question: at least the "ab" vs. "abc" case seems to be caused by this line: https://github.com/MoarVM/MoarVM/blob/c779c6320a/src/strings/ops.c#L1833
[20:14] for three chars output_size is 6 and elem_size is 8 -- so only one elem is added.
[20:15] m: use nqp; dd nqp::encode("abc", "utf16", buf64.new) ## output_size is 4, elem_size is 8 -> 0 elems
[20:15] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint64].new()␤»
[20:15] that doesn't look correct to me
[20:24] oh, it's actually these lines. but still: https://github.com/MoarVM/MoarVM/blob/c779c6320a/src/strings/ops.c#L1840-L1841
[21:08] bartolin: looks like just no one has ever tried that
[21:08] I'm testing a fix right now (just rounding up there)
[21:09] m: use nqp; dd nqp::encode("a", "utf16", buf16.new)
[21:09] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint16].new(97)␤»
[21:13] So I'd assume that this is just correct (both 16-bit values are combined into one value in the 32-bit buffer):
[21:13] m: use nqp; dd nqp::encode("ab", "utf16", buf32.new)
[21:13] rakudo-moar 505ffbfa1: OUTPUT: «Buf[uint32].new(6422625)␤»
[21:14] the JVM backend gives Buf[uint16].new(97,98) there. So that needs to be changed, too ...
[21:20] ok, my naive approach didn't work too well (some broken spectests). I think I'll open an issue for MoarVM then.
[21:24] well, the JVM gives Buf[uint32].new(97,98) for my last evaluation (I messed that up while pasting the result). So the buffer size is right, but it doesn't combine the two 16-bit values.
[21:24] Thanks for listening ;)
[21:31] Thanks for exploring these corners :)
[21:39] https://github.com/rakudo/rakudo/issues/4643
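To make the buf32 result discussed above a bit more concrete: the same four utf16 little-endian bytes shown by the buf8 output can be read back at different widths with Raku's Buf read methods, which shows where the single big integer comes from. This is just the byte-level arithmetic, not a statement about what nqp::encode ought to do:

    # "ab" as utf16 little-endian bytes, matching the buf8 output above:
    my $bytes = Buf.new(97, 0, 98, 0);
    say $bytes.read-uint16(0, LittleEndian);   # 97  ('a')
    say $bytes.read-uint16(2, LittleEndian);   # 98  ('b')
    # the same four bytes viewed as one 32-bit element, like the buf32 target:
    say $bytes.read-uint32(0, LittleEndian);   # 6422625
    say 97 + (98 +< 16);                       # 6422625 -- 'a' in the low half, 'b' in the high half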