🦋 Welcome to the IRC channel of the core developers of the Raku Programming Language (raku.org #rakulang). This channel is logged for the purpose of history keeping about its development | evalbot usage: 'm: say 3;' or /msg camelia m: ... | log inspection situation still under development | For MoarVM see #moarvm
Set by lizmat on 22 May 2021.
00:02 reportable6 left 00:46 patrickb left 01:05 reportable6 joined 02:05 evalable6 left, linkable6 left 02:07 linkable6 joined 02:32 frost joined 04:05 linkable6 left 04:06 linkable6 joined 04:08 evalable6 joined 06:02 reportable6 left 06:04 reportable6 joined 08:03 dogbert17 left 08:04 dogbert17 joined, frost left 08:07 frost joined 09:12 linkable6 left, evalable6 left 10:03 MasterDuke left 10:06 MasterDuke joined 10:14 linkable6 joined
Geth rakudo: 92c490ee4e | (Elizabeth Mattijsen)++ | src/core.c/IterationBuffer.pm6
Micro-optimize IterationBuffer.append(IterationBuffer:D)
10:14
10:15 evalable6 joined
Geth rakudo/IterationBuffer.unshift: 71409d0b26 | (Elizabeth Mattijsen)++ | src/core.c/IterationBuffer.pm6
Add IterationBuffer.unshift/prepend methods

Seems like a logical addition without interfering with the performance of the other IterationBuffer logic. And I have a use-case for it :-)
10:21
rakudo: lizmat++ created pull request #4641:
Add IterationBuffer.unshift/prepend methods
releasable6 Next release in ≈3 days and ≈7 hours. 3 blockers. Please log your changes in the ChangeLog: github.com/rakudo/rakudo/wiki/ChangeLog-Draft 11:00
lizmat ^^^ and we're still looking for a release manager... 11:02
if you're hanging out on this channel, pretty sure you have the (at least potential) chops to be one!
moon-child lizmat: I expect the primary bottleneck is time and energy, not capability 11:04
speaking for myself, at least
lizmat that goes for a lot of us :-)
12:02 reportable6 left 12:05 reportable6 joined
[Tux] updating my OS, so probably no timing today 12:34
12:34 squashable6 left
lizmat [Tux]: hope that that will be painless 12:35
12:50 Kaiepi left
lizmat m: dd ((1,2,3),(4,5,6)).map: *.Seq # is there a reason we don't transparently turn this into (1,2,3,4,5,6).Seq ? 12:58
camelia ((1, 2, 3).Seq, (4, 5, 6).Seq).Seq
12:59 Kaiepi joined, Kaiepi left 13:00 Kaiepi joined, Kaiepi left, Kaiepi joined
lizmat m: dd ((1,2,3),(4,5,6)).map: *.Slip # I know this will do it, but sadly that still produces *all* values of the list given 13:01
camelia (1, 2, 3, 4, 5, 6).Seq
lizmat m: dd (((1,2,3),(4,5,6)).map: { .say; .Slip }).head(4) # even if only partially needed
camelia (1 2 3)
(1, 2, 3, 4).Seq
(4 5 6)
13:02 Kaiepi left, Kaiepi joined 13:03 Kaiepi left, Kaiepi joined 13:05 Kaiepi left
lizmat It looks like supporting *.Seq lazily slipping into another Seq would be just a matter of adding "|| nqp::istype($value,Seq)" in a few specific places 13:25
the problem with Slipping a Seq is that it caches values (because Slip isa List) 13:26
13:26 Kaiepi joined
lizmat or maybe we just need to make Seq.Slip smarter / lazier 13:27
14:08 linkable6 left 14:10 linkable6 joined
Geth nqp: e2146395fb | niner++ (committed using GitHub Web editor) | 5 files
New disp nativecall (#742)

This can be used to conditionally compile backend specific code in modules like NativeCall
14:14
14:15 linkable6 left 14:17 linkable6 joined
Geth nqp: 7309a35683 | (Elizabeth Mattijsen)++ | tools/templates/MOAR_REVISION
Bump MoarVM to get the nativecall based on new-disp

  nine++ for a lot of work on that one!!
14:19
rakudo: ae8fec579a | niner++ (committed using GitHub Web editor) | 7 files
New disp nativecall (#4629)

Lets callable objects have a say in which dispatcher is used when they are invoked.
API for asking whether the compiler supports a certain nqp op. This can be used to conditionally compile backend specific code in modules like NativeCall
... (25 more lines)
14:21
14:22 linkable6 left 14:24 frost left 14:25 linkable6 joined
Geth rakudo: 505ffbfa1a | (Elizabeth Mattijsen)++ | tools/templates/NQP_REVISION
Bump NQP to get all of the NativeCall support
14:40
lizmat Looking forward to [Tux]'s Inline::Perl5 test-t results 14:41
15:03 [Tux] left 15:09 [Tux] joined 15:17 dogbert17 left 15:26 dogbert17 joined 15:36 squashable6 joined 15:46 Kaiepi left 15:47 Kaiepi joined 16:00 patrickb joined 16:25 rypervenche left 16:32 rypervenche joined 18:02 reportable6 left 19:03 reportable6 joined
bartolin I'm currently looking into github.com/Raku/nqp/issues/250. I've already noticed a couple of oddities with the output of nqp::encode on the JVM backend. When trying things out I'm comparing with the output on MoarVM. But doing that I've seen some things I don't understand, too. Maybe someone could enlighten me there? 19:39
m: use nqp; dd nqp::encode("ab", "utf16", buf8.new) ## this looks logical -- two elems in an 8-bit buffer for one utf16 code point 19:40
camelia Buf[uint8].new(97,0,98,0)
bartolin m: use nqp; dd nqp::encode("ab", "utf16", buf16.new) ## looks logical as well
camelia Buf[uint16].new(97,98)
bartolin m: use nqp; dd nqp::encode("ab", "utf16", buf32.new) ## but why is this somehow added up to one big integer here? 19:41
camelia Buf[uint32].new(6422625)
bartolin m: say 97 + 2**16 * 98
camelia 6422625
bartolin m: use nqp; dd nqp::encode("abc", "utf16", buf32.new) ## same result for two and three characters? 19:42
camelia Buf[uint32].new(6422625)
bartolin Or does that call with a buf32.new to nqp::encode make no sense? (But why?) 19:43
[Tux] Rakudo v2021.10-100-g505ffbfa1 (v6.d) on MoarVM 2021.10-71-gc779c6320
csv-ip5xs        1.144 - 1.170
csv-ip5xs-20     6.242 - 6.255
csv-parser       4.171 - 4.501
csv-test-xs-20   0.359 - 0.422
test             7.032 - 7.206
test-t           1.687 - 1.707
test-t --race    0.973 - 0.985
test-t-20       24.768 - 24.948
test-t-20 --race 7.548 - 8.033
19:47
lizmat [Tux]: feels like the machine is slower after the update? 19:49
[Tux] or still busy with little things. I just wanted to prove it all still works 19:50
19:51 notna joined
MasterDuke but wow, big drop in csv-ip5xs-20, looks like it was 15 yesterday 19:51
lizmat logs.liz.nl/search.html?query=csv-...l=raku-dev 19:57
bartolin nine++ 20:12
returning to my earlier question: at least the "ab" vs. "abc" case seems to be caused by this line: github.com/MoarVM/MoarVM/blob/c779...ps.c#L1833 20:14
for three chars output_size is 6 and elem_size is 8 -- so only one elem is added.
m: use nqp; dd nqp::encode("abc", "utf16", buf64.new) ## output_size is 4, elem_size is 8 -> 0 elems 20:15
camelia Buf[uint64].new()
bartolin that doesn't look correct to me
oh, it's actually these lines. but still. github.com/MoarVM/MoarVM/blob/c779...1840-L1841 20:24
20:33 Xliff joined 21:05 notna left
nine bartolin: looks like just no one has ever tried that 21:08
bartolin I'm testing a fix right now (just rounding up there)
m: use nqp; dd nqp::encode("a", "utf16", buf16.new) 21:09
camelia Buf[uint16].new(97)
bartolin So, I'd assume that this is just correct (both 16-bit values are combined as one value in the 32-bit buffer): 21:13
m: use nqp; dd nqp::encode("ab", "utf16", buf32.new)
camelia Buf[uint32].new(6422625)
bartolin the jvm backend gives Buf[uint16].new(97,98) there. So that needs to be changed, too ... 21:14
ok, my naive approach didn't work too well (some broken spectests). I think I'll open an issue for MoarVM then. 21:20
well, jvm gives Buf[uint32].new(97,98) for my last evaluation (messed that up while pasting the result). So the buffer size is right, but it doesn't combine the two 16-bit values. 21:24
Thanks for listening ;)
nine Thanks for exploring these corners :) 21:31
sena_kun github.com/rakudo/rakudo/issues/4643 21:39
22:58 vrurg_ joined 23:01 vrurg left 23:28 vrurg_ left, vrurg joined