🦋 Welcome to the IRC channel of the core developers of the Raku Programming Language (raku.org #rakulang). This channel is logged for the purpose of history keeping about its development | evalbot usage: 'm: say 3;' or /msg camelia m: ... | Logs available at irclogs.raku.org/raku-dev/live.html | For MoarVM see #moarvm
Set by lizmat on 8 June 2022.
00:04 discord-raku-bot left 00:05 discord-raku-bot joined 02:11 MasterDuke joined
MasterDuke man, that hyper timing result is weird... 02:59
hm. the non-hyper version did 1276 GCs, but the hyper version did only 184 03:03
but the hyper version did a full GC, so the time spent in GC was only ~2x for the non-hyper version 03:04
the hyper profile numbers are all a bit suspect though, since many of them are noticeably corrupted (we don't really support profiling multi-threaded programs) 03:06
huh. the non-hyper version shows CPU usage that's never greater than 100% (according to top), but the hyper version, even with `degree => 1`, shows slightly over 100%, like 107%. so maybe it is starting another/new thread and not competing with the main and/or spesh thread? 03:12
lizmat MasterDuke: yeah, it's weird 09:22
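(Illustrative sketch, not from the log: the kind of comparison discussed above, with the same workload run once serially and once through .hyper with degree => 1. The workload below is hypothetical; the exact program MasterDuke profiled is not shown here.)
    # Hypothetical workload: count primes below one million, first serially,
    # then via a HyperSeq constrained to a single worker (degree => 1).
    my @nums = ^1_000_000;

    my $t0 = now;
    my $serial-count = @nums.grep(*.is-prime).elems;
    say "serial:             $serial-count primes in { now - $t0 }s";

    $t0 = now;
    my $hyper-count = @nums.hyper(degree => 1).grep(*.is-prime).elems;
    say "hyper(degree => 1): $hyper-count primes in { now - $t0 }s";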
09:35 epony left 09:50 MasterDuke left 10:18 sena_kun joined 10:40 patrickb left 10:44 patrickb joined 10:55 epony joined
lizmat So, looking at snapper output: gist.github.com/lizmat/879d924d3bc...047bd0b034 11:04
the CPU usage for both is the same per time period, it's just that the hyper => 1 case is much faster 11:06
so I wonder: either the CPU telemetry is wrong at a very low level (which I doubt) 11:09
or: when running on a worker thread, maybe the code gets run on a CPU that's much more capable of handling this particular workload 11:10
if that's the case, I wonder if it would make sense to *always* run the mainline on a worker thread 11:11
aka, wrap the mainline in an: await Promise.start(&mainline) 11:12
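(Illustrative sketch, not from the log: the idea of running the mainline on a thread-pool worker rather than the initial thread. The mainline sub below is a hypothetical stand-in for a real program body.)
    # Hypothetical mainline; any CPU-bound program body would do here.
    sub mainline() {
        say (^Inf).grep(*.is-prime).skip(99_999).head;
    }

    # Instead of calling mainline() directly on the initial thread,
    # schedule it on the thread pool and wait for it to finish.
    await Promise.start(&mainline);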
11:17 Altai-man joined 11:18 patrickb left, patrickb joined 11:20 jdv left 11:21 jdv joined, sena_kun left
ugexe that seems unlikely imo 13:59
lizmat m: say (^Inf).grep(*.is-prime).skip(999999).head; say now - ENTER now 14:01
camelia 15485863
4.487197684
14:02
lizmat m: await start say (^Inf).grep(*.is-prime).skip(999999).head; say now - ENTER now
camelia 15485863
5.013915359
lizmat so the only thing different between ^^ and the hyper => 1 case is that the worker thread gets many more separate jobs, instead of a single big one 14:04
(looking at snapper output)
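(Note on the timing idiom used above: ENTER is a phaser evaluated once on entry to the enclosing block, so `now - ENTER now` gives the elapsed wall-clock time since block entry. A standalone example, not from the log:)
    # ENTER now captures the Instant at block entry; by the time the second
    # `now` runs, the difference is the elapsed time for the work in between.
    say (^1_000_000).grep(*.is-prime).elems;
    say now - ENTER now;   # elapsed seconds since mainline entry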
jdv looks like the blin blockers haven't been fixed in time 14:34
so we either ship today and any fixes have to wait until the January release in about a month, or we defer
lizmat looks at the blin blockers 14:35
jdv looks like vrurg is into it now 14:39
vrurg lizmat: I'm about to commit a fix for Tier if spectest is OK locally. 14:40
*Trie 14:45
lizmat vrurg: I think the problem is related to my int64 @ats[$TZ_MAX_TIMES]; 14:47
where my $TZ_MAX_TIMES = 2000;
and/or possibly has int8 @.types[$TZ_MAX_TIMES] 14:49
m: my $size = 10; my @a[$size]; my int @b = ^10; @a = @b # vrurg golf! 14:51
camelia Cannot assign an array of shape * to an array of shape 10
in block <unit> at <tmp> line 1
lizmat both need to be native types 14:52
m: my $size = 10; my @a[$size]; my @b = ^10; @a = @b
camelia ( no output )
lizmat m: my $size = 10; my @a[$size]; my @b = ^10; @a = @b; dd @a
camelia Array.new(:shape(10,), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
lizmat actually the source needs to be native 14:53
the destination must not be
m: my $size = 10; my int @a[$size]; my int @b = ^10; @a = @b; dd @a
camelia array[int].new(:shape(10,), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Geth rakudo/main: 52aec48cdb | (Vadim Belman)++ | 3 files
Fix arrays and hashes sharing same backing storage

Instantiation of a concrete collection type tries to preserve the content of the generic original by passing the original into the constructor call of the instantiation. While not a big deal on its own when mapping 1-to-1, this is a major problem when the instantiation is used as an attribute container initializer, as in this case the object gets ... (5 more lines)
14:54
lizmat vrurg: this does not fix: 14:56
m: my $size = 10; my @a[$size]; my int @b = ^10; @a = @b
camelia Cannot assign an array of shape * to an array of shape 10
in block <unit> at <tmp> line 1
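(Illustrative workaround sketch, not from the log: copying element by element avoids the whole-array assignment path that rejects the shape mismatch between a native source and a shaped, non-native destination.)
    my $size = 10;
    my @a[$size];                     # shaped, non-native destination
    my int @b = ^10;                  # native source
    @a[$_] = @b[$_] for ^@b.elems;    # element-wise copy succeeds
    dd @a;   # Array.new(:shape(10,), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])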
vrurg lizmat: As I mentioned in the issue comment: TimeZone is in the works. 15:00
lizmat sorry, missed that, hope that my golf helped 15:01
confirmed Trie is ok now 15:04
vrurg The golf will make it much easier for sure. Thanks!
lizmat afk for a few hours& 15:05
vrurg lizmat: the golf actually fails on both 2023.10 and 2023.11. I never used shaped arrays, but the failure looks like a legitimate one; I'd need to figure out what broke the module. 15:18
[Tux] Rakudo v2023.11-99-g52aec48cd (v6.d) on MoarVM 2023.11-1-gbe03e26fc
csv-ip5xs        0.268 - 0.275
csv-ip5xs-20     1.133 - 1.165
csv-parser       1.510 - 1.515
csv-test-xs-20   0.141 - 0.143
test             1.932 - 1.939
test-t           0.420 - 0.427
test-t --race    0.284 - 0.286
test-t-20        5.169 - 5.247
test-t-20 --race 1.219 - 1.249
15:25
vrurg lizmat: With TimeZone, it's not a regression; I unintentionally fixed a bug that was letting the module work. :) 15:27
15:29 epony left 15:31 epony joined
jdv thanks lizmat and vrurg. i gotta run but i'll get to the release in a few hours. 15:49
[Coke] jdv++ 16:32
lizmat bisectable6 old=2023.02 my @a[10]; my int @b = ^10; @a = @b 17:43
bisectable6: old=2023.02 my @a[10]; my int @b = ^10; @a = @b
bisectable6 lizmat, On both starting points (old=2023.02 new=52aec48) the exit code is 1 and the output is identical as well
lizmat, Output on both points: «Cannot assign an array of shape * to an array of shape 10␤ in block <unit> at /tmp/LTS1kkJKRW line 1␤␤»
lizmat interesting
m: my @a[10] = ^10; dd @a # if this is ok 17:44
camelia Array.new(:shape(10,), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
lizmat m: my @a[10]; my @b = ^10; @a = @b; dd @a # and this is ok
camelia Array.new(:shape(10,), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
lizmat m: my @a[10]; my int @b = ^10; @a = @b; dd @a # then this should be ok as well 17:45
camelia Cannot assign an array of shape * to an array of shape 10
in block <unit> at <tmp> line 1
lizmat but I will take that on after the 2023.12 release, as it clearly is a longer standing issue
.oO( sometimes I wonder whether we could use a parameter attribute trait to indicate it should be deconted, rather than reconted
18:41
)
s/attribute// 19:12
jdv changelogs are up. will begin the actual releases in a bit. 20:26
Geth rakudo/release-2023.12: 7f4beaadf6 | (Justin DeVuyst)++ | 3 files
Update changelog + announcement

Deliberately not logged:
  [73470580][2e21b9d7][f562e6b1][ccd375b3][bd9e0be0][e305eb59]
  [69d61680][7f97bdd4][ce46b15c][28ebb7ac][d2f8bb6f][1b73d18d]
  [20c0c5e0][dd239016][a6c36008][7a92c6f5][3151bca6][3434e54d]
  [f9586cea][e1a86f11][a01033db][621709b3][ffd7b14a][0a62cd70]
  [b5455721][c3e68caa][9b65e880][4e0bd4f9]
21:28
rakudo/release-2023.12: e6b0e99947 | (Justin DeVuyst)++ | 3 files
Update changelog + announcement

Deliberately not logged:
  [73470580][2e21b9d7][f562e6b1][ccd375b3][bd9e0be0][e305eb59]
  [69d61680][7f97bdd4][ce46b15c][28ebb7ac][d2f8bb6f][1b73d18d]
  [20c0c5e0][dd239016][a6c36008][7a92c6f5][3151bca6][3434e54d]
  [f9586cea][e1a86f11][a01033db][621709b3][ffd7b14a][0a62cd70]
  [b5455721][c3e68caa][9b65e880][4e0bd4f9]
21:32
nqp/main: b9290ecbb1 | (Justin DeVuyst)++ | tools/templates/MOAR_REVISION
[release] Bump MoarVM revision to 2023.12
23:00
nqp/main: 599ccb7be9 | (Justin DeVuyst)++ | VERSION
[release] Bump VERSION to 2023.12
rakudo/release-2023.12: a969489388 | (Justin DeVuyst)++ | tools/templates/NQP_REVISION
[release] Bump NQP revision to 2023.12
rakudo/release-2023.12: ae1289817e | (Justin DeVuyst)++ | VERSION
[release] Bump VERSION to 2023.12
23:03 RakuIRCLogger left, RakuIRCLogger joined
Geth rakudo: jdv++ created pull request #5503:
Release 2023.12
23:06
rakudo/main: 4 commits pushed by (Justin DeVuyst)++ 23:07
jdv the release is done 23:23
patrickb: release happened
.tell El_Che release happened 23:24
is that how to do that here?
23:25 bisectable6 left, bisectable6 joined
tellable6 jdv, I'll pass your message to El_Che 23:25
23:26 Altai-man left
lizmat jdv++ 23:34
23:38 JRaspass left, JRaspass joined