🦋 Welcome to the IRC channel of the core developers of the Raku Programming Language (raku.org #rakulang). This channel is logged for the purpose of history keeping about its development | evalbot usage: 'm: say 3;' or /msg camelia m: ... | log inspection situation still under development | For MoarVM see #moarvm
Set by lizmat on 22 May 2021.
lizmat Files=1353, Tests=117192, 308 wallclock secs (35.81 usr 9.58 sys + 4087.00 cusr 339.79 csys = 4472.18 CPU) 07:43
Geth rakudo/lizmat-constantize-compiler-info: 257e06ab5f | (Elizabeth Mattijsen)++ | tools/cleanup-precomps.raku
Remove cleanup-precomps script
08:15
patrickb jdv: My reasoning was that if rakudo.org was compromised (someone modified the downloads), they would also be able to modify the keys hosted there (and unless they are stupid, they would surely do so). 08:18
jdv: So my proposal is that each uploader lists their key fingerprint on at least one other site. Since every Raku dev has a GitHub account, I thought the GH profile is a place where everyone is able to list their fingerprint (like I did here: github.com/patrickbkr ) 08:20
jdv: I do recognize that only having keys on rakudo.org still provides some level of protection, because users that have already verified a download in the past would already have the keys downloaded. A maliciously changed key would need to be re-downloaded and could thus arouse suspicion. 08:23
jdv: But all this is only my personal reasoning, and I don't think I've seen this policy (listing fingerprints on third-party sites) implemented on any other site. That's why I'd like feedback on whether we should proceed with this approach and, if not, how else we should do it. 08:25
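(Not from the discussion: a minimal Raku sketch of the kind of verification patrickb describes. The archive name, signature name and fingerprint are placeholders, and the gpg invocation is just one possible way to do the check, not an agreed procedure.)

    # Verify a downloaded archive against its detached signature, then compare the
    # signing key's fingerprint with the one published on the releaser's GitHub profile.
    my $archive   = 'rakudo-moar-2022.04-linux-x86_64.tar.gz';            # hypothetical
    my $signature = "$archive.asc";                                        # hypothetical
    my $expected  = '0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567';  # placeholder, copied from the GH profile

    my $gpg    = run 'gpg', '--status-fd', '1', '--verify', $signature, $archive, :out, :err;
    my $status = $gpg.out.slurp(:close);
    $gpg.err.slurp(:close);

    # gpg's machine-readable status output contains a VALIDSIG line with the full fingerprint
    my $fpr = ($status ~~ / 'VALIDSIG' \s+ (<xdigit>+) /) ?? ~$0 !! '';
    if $gpg.exitcode == 0 && $fpr.uc eq $expected.subst(/\s+/, '', :g).uc {
        say "signature OK, signed by $fpr";
    }
    else {
        die "signature or fingerprint check failed";
    }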
nine patrickb: I think there's a severe bug in your script github.com/rakudo/rakudo/issues/4899 08:29
patrickb jdv: The PR requires all recent releasers to cooperate (put their key fingerprint on GH), so all recent releasers (I think that's jdv, AntonOks, sena-kun and me) need to read, understand and hopefully approve the PR. 08:30
nine patrickb: it's encoding music to mp3 instead of a superior codec like vorbis!
patrickb nine: :-P Tell that to my car radio
nine Even in 2022? That's horrible
But then, I guess a car is not a HiFi environment anyway 08:31
patrickb The reason I have to have that script is that the biggest part of my collection IS encoded in opus or vorbis.
WRT more bugs: I am glad to be corrected and learn. 08:33
nine patrickb: after what time can one expect that reduction in CPU usage? 08:53
patrickb In that specific case the first core was dropped after about 1:10 08:58
In the screenshots one can see quite nicely when the cores are lost. The full width of the graph is 5 minutes. 08:59
nine I've tried reproducing it with a reduced test case, but so far failed: rakudo -e 'my @input = ^1000; race for @input { my $proc = shell "dd if=/dev/urandom bs=4096 count=102400 | gzip > /dev/null", :err; $proc.err.slurp(:close) }' 09:11
patrickb tries 09:17
nine: I think I found the sweet spot on my system. 09:36
rakudo -e 'my @input = ^100000; race for @input { my $c = (($_ % 100 != 0 ?? 500 !! 0) + 100.rand).floor; my $proc = shell "dd if=/dev/urandom bs=4096 count=$c | gzip > /dev/null; echo done $_", :err; $proc.err.slurp(:close) }'
nine Ah, yes, utilization goes down pretty rapidly 09:38
patrickb I suspect the scheduler tries to set the number of cores to use based on an estimate of the number of elements in the list and the average time the last few items took. If some fast items happen to be next to each other, it reduces the number of cores.
Also interesting: if one reduces the 500 to, say, 100, then the program doesn't start off with 32 cores but with a lot fewer (~5 in my case). 09:39
nine I see the same issue with rakudo -e 'my @input = ^100000; @input.race(:degree(32), :batch(1)).map: { my $c = (($_ % 100 != 0 ?? 500 !! 0) + 100.rand).floor; my $proc = shell "dd if=/dev/urandom bs=4096 count=$c | gzip > /dev/null; echo done $_", :err; $proc.err.slurp(:close) }' 09:40
patrickb But that might actually make sense, as the time required for management could weigh more than the time to do the actual work. 09:41
Interesting. So it's not only the `race for` as I originally suspected.
nine for actually compiles to map, so I'd have been surprised anyway 09:43
Btw. I don't think the scheduler ever reduces the number of worker threads. 09:51
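(Not from the log: a minimal Raku sketch of one way to sidestep race's adaptive degree/batch heuristics for this kind of workload, by feeding a fixed number of worker threads from a Channel. $degree, the input size and the dd/gzip command are placeholders mirroring the one-liners above.)

    my @input  = ^1000;
    my $degree = 32;          # hand-picked, fixed number of workers

    my Channel $work .= new;
    $work.send($_) for @input;
    $work.close;

    my @workers = (^$degree).map: {
        start {
            # Channel.list keeps handing out items until the channel is closed and drained
            for $work.list -> $item {
                my $proc = shell "dd if=/dev/urandom bs=4096 count=1024 | gzip > /dev/null", :err;
                $proc.err.slurp(:close);
            }
        }
    }
    await @workers;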
Geth IO-Path-ChildSecure/main: 9f01c370ba | (Elizabeth Mattijsen)++ | 9 files
1.2
10:00
Geth WebService-GitHub/main: ba4e94c97a | (Patrick Böker)++ | 2 files
Add a start of documentation
13:33
WebService-GitHub/main: a94d71ccf9 | (Patrick Böker)++ | dev-scripts/generate-module
Version 0.2.1
13:34
jdv lizmat: are you done with your changelog edits? 14:08
lizmat actually, I was just about to look at them again :-)
got distracted a bit :-)
jdv: give me 30 mins or so? 14:09
jdv ok, i'll start the release in 50mins, at the top of the hour i guess 14:10
lizmat ok, that should be enough :-)
jdv thanks
lizmat jdv: done with the changelog draft 14:34
Geth rakudo/lizmat-fix-unneeded-dependency-check: 58f6bc961d | (Elizabeth Mattijsen)++ | src/core.c/CompUnit/PrecompilationRepository.pm6
Fix unneeded dependency check

Commit 199888abedfe843996 in March 2020 borked the setting of the repo-id. This caused unneeded dependency checking for installed modules, and a slowdown of about 80 msecs when loading e.g. NativeCall.
Now the slowdown will only occur the first time after installation, at which point the repo-id is updated correctly so that subsequent loading will not have to do dependency checks.
Spotted by nine++ after looking into github.com/rakudo/rakudo/issues/4900
15:04
rakudo: lizmat++ created pull request #4901:
Fix unneeded dependency check
jdv lizmat: ok, i guess you didn't update the ignore and delete - i'll do it 15:09
lizmat ah... sorry, I thought you wanted to do that
jdv there's nothing i want to do more 15:11
ha
jdv anyway... aws seems to not be able to start the instance i use for this stuff... 15:58
of all things:)
"insufficient capacity" for the last hour so far. good times. 15:59
jdv well, this is swell. i guess we'll aim for sunday or monday then. sorry folks. 17:37
[Coke] no worries, thanks for your effort! 17:47
anyone have pointers on how to run blin in a docker container against everything? (and what sort of specs I need on the machine hosting the container) 18:05
[Coke] wonders why AGPL was chosen. 18:07
(for Blin)
jdv it largely just works. i run the docker way. 18:17
there may be a few unfixed minor issues. been meaning to look and fix if so.
i normally run it on a c6i.12xlarge or so 18:18
iirc there's 1 dep issue and i added november and test async to the skip list cause they always fail in random ways 18:19
[Coke]: ^ 18:20
iirc the rakudo release guide has some blin resource ideas 18:21
Geth rakudo: dwarring++ created pull request #4902:
fix orphaned trailing Pod declarands
18:30
Geth ¦ nqp: coke self-assigned t/nqp/19-file-ops.t failure in pre-2016.01 github.com/Raku/nqp/issues/274 21:37