[Coke] ⏳ 220 out of 2204 modules processed 00:00
05:50 sivoais left 06:00 sivoais joined 07:40 sena_kun joined 08:11 sena_kun left 09:12 finanalyst joined
Geth rakudo: patrickbkr++ created pull request #5638:
set-env.ps1: Fix detecting build tools with multiple installed
10:35
rakudo/main: c4c540eb63 | (Patrick Böker)++ | tools/build/binary-release/assets/Windows/scripts/set-env.ps1
set-env.ps1: Fix detecting build tools with multiple installed

Thanks @pullrich for the report and fix!
Fixes #5636
rakudo/main: fd309af894 | (Patrick Böker)++ (committed using GitHub Web editor) | tools/build/binary-release/assets/Windows/scripts/set-env.ps1
Merge pull request #5638 from patrickbkr/fix-set-env-ps1-multiple-build-tools

set-env.ps1: Fix detecting build tools with multiple installed
11:36 finanalyst left
[Coke] ⏳ 371 out of 2204 modules processed 13:35
Killed
13:36 MasterDuke joined
[Coke] Guessing I need to double the RAM again. 13:38
jdv: how much RAM/CPU do you have on the box you do blin runs on?
(resizing from 4G to 8G and will re-run). 13:42
timo there's a fair amount of control over the oom process (if it was actually oom-killed, which it probably was TBF): you can decide whether a whole process group gets killed or only parts of it, and you can give individual processes a higher likelihood of being chosen by the oom-killer; if you're root or otherwise have the right capability, you can also reduce a process's oom score to make it less likely 14:09
to be chosen
would probably be a good idea to try to oom-adjust the score up in a subprocess of the blin runner so it can try to recover or at least write out its state or log or whatever 14:10
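A minimal sketch of the adjustment timo describes, assuming a Linux host; <worker-pid> is a placeholder for a Blin child process:
    # raising your own process's score needs no special privileges
    # (range is -1000 = never kill ... 1000 = kill first)
    echo 500 > /proc/<worker-pid>/oom_score_adj
    # the same thing via util-linux's choom
    choom -p <worker-pid> -n 500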
[Coke] ok. Will investigate that if it gets killed again, as I don't want to venture too high into "throw more azure resources at it" 14:16
timo do you get access to dmesg where an oom kill event would be documented? 14:17
it would also show up in the systemd journal normally
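To confirm an OOM kill after the fact, per the above (both are standard commands on a systemd-based Linux box):
    # kernel ring buffer with human-readable timestamps
    sudo dmesg -T | grep -iE 'out of memory|oom-kill|killed process'
    # the same kernel messages via the systemd journal
    journalctl -k | grep -i oom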
[Coke] I have sudo on the box, so presumably. 14:18
(seems to be running faster with more memory, yay.)
So it runs old and new for each module. But it definitely seems to be reporting many more (new) than (old) going through. 14:23
jdv [Coke]: how goes? 14:25
iirc 16 cores and 16g.
[Coke] ah. I'm up to 2/8 on this run 14:26
jdv ah, to be 16 again...
[Coke] jdv: back up to ⏳ 83 out of 2204 modules processed
jdv so by 2025 you'll have a run? 14:27
:)
[Coke] hopefully
jdv a run takes my 16-core/16G box about 8h. 14:28
coolio
[Coke] I wonder if it's skipping some olds because they were saved from a previous run? 14:31
lizmat notable6: weekly 14:34
notable6 lizmat, 2 notes: 2024-09-01T22:00:16Z <antononcube>: youtu.be/pcGH6CeptJE ; 2024-09-02T12:22:06Z <lizmat>: github.com/FCO/EventExpressionLang...write.raku
lizmat notable6: weekly reset 14:40
notable6 lizmat, Moved existing notes to “weekly_2024-09-02T14:40:33Z”
lizmat and yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2024/09/02/2024-36-on-top/ 14:47
Geth rakudo/main: e335344c96 | (Elizabeth Mattijsen)++ | src/core.c/CompUnit/RepositoryRegistry.rakumod
Ignore PERL6LIB setting if RAKULIB is specified

This also silences any deprecation warning for PERL6LIB if RAKULIB is specified, which is nice for e.g. Blin
15:10
lizmat [Coke] ^^ 15:11
[Coke] Thanks! 15:19
ugexe while that is probably fine, in any other language that would be part of a major version change 15:22
we don't follow semver anyway, so it's just an observation and not a complaint 15:24
timo it's also a little odd that it's an elseif on only the RAKULIB check, unrelated to the RAKUDOLIB check 15:25
lizmat I wonder why we have RAKUDOLIB actually 15:26
ugexe users also wouldn't get the deprecation warning if they have both RAKULIB and PERL6LIB set 15:27
lizmat they would before e335344c96
linkable6 (2024-09-02) github.com/rakudo/rakudo/commit/e335344c96 Ignore PERL6LIB setting if RAKULIB is specified
ugexe in my opinion it should warn 15:28
lizmat if you use both? 15:29
[Coke] RAKUDOLIB isn't standard, is why, I'm guessing.
ugexe if you use PERL6LIB, which we say is deprecated, it should warn. no other environment variable should affect the warning that it would generate 15:30
Geth rakudo/main: dc5117f48e | (Elizabeth Mattijsen)++ | src/core.c/CompUnit/RepositoryRegistry.rakumod
Revert "Ignore PERL6LIB setting if RAKULIB is specified"

This reverts commit e335344c96291d8b403aa7a7dead3d9d17aae091.
patrickb I think the idea was that the business of setting a lib search path is not part of the raku spec. So it's a thing of the runtime. Thus RAKUDOLIB would make more sense. 15:31
[Coke] ugexe: the idea, to me, is that you've already addressed it by adding the new one. (esp. in this case where I might be running against *any* raku and am trying to minimize the amount of warnings I get). 15:32
The ideal solution of "figure out which release of raku I'm using, and therefore exactly where the deprecation is present, so I can use exactly the right variable" is more complicated for users (and is how it was before that commit) 15:33
AlexDaniel just suggested that blin use RAKUDOLIB instead, which is probably also fine. 15:34
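A sketch of what that suggestion looks like for a Blin-style invocation; the lib path and test file are placeholders:
    # deprecated spelling, and the source of the warning being discussed
    PERL6LIB=/path/to/lib raku t/some-test.rakutest
    # either of these avoids it; RAKUDOLIB is the runtime-specific name
    RAKULIB=/path/to/lib raku t/some-test.rakutest
    RAKUDOLIB=/path/to/lib raku t/some-test.rakutest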
15:34 sivoais left, camelia left, jjatria left, andinus left
[Coke] I'm only thinking in the context of Blin at the moment (and wasn't expecting a rakudo change here.) 15:35
lizmat yeah, that's why I did the revert
15:35 sivoais joined, camelia joined, jjatria joined, andinus joined
[Coke] (Blin finally made it out of the As. so slow) 15:35
ugexe i agree that would be frustrating and that there might be a solution. but i think what you are doing is more of a special case 15:38
if i were to quickly propose an alternative solution, it would be some way to disable specific deprecation warnings 15:40
like maybe you set RAKULIB to address that warning, but another user may have set it for another unrelated reason. Or they might set PERL6LIB in their test file (which is bad but beside the point) and then zef/tap6 sets RAKULIB when invoking the test file itself 15:46
[Coke] yup, makes sense 15:51
16:46 finanalyst joined 18:01 sena_kun joined 18:06 finanalyst left
timo "git clone --reference-if-able" doesn't seem to work when also using --recurse-submodules, unless i have to specify every submodule's repo as referenceable? 18:36
ab5tract [ 18:43
timo nope :|
hm. can't I do "git submodule init" + "git submodule update --reference" without hitting the network for the original github.com? that's unfortunate 18:57
oh, "git clone --reference-if-able" also wants to hit the network huh ... well, that makes sense 19:01
i just don't want to put too much strain on my tethering :D
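For reference, a sketch of the local-reference workflow being attempted; the mirror paths are hypothetical, and as noted above the clone still contacts the upstream for anything the reference lacks:
    # borrow objects from a local mirror where possible
    git clone --reference-if-able /srv/mirror/rakudo.git \
        https://github.com/rakudo/rakudo.git
    cd rakudo
    # submodules need their own reference passed explicitly
    git submodule update --init \
        --reference /srv/mirror/nqp-configure.git 3rdparty/nqp-configure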
20:19 lizmat_ joined 20:21 notable6__ joined 20:22 evalable6__ joined, unicodable6__ joined, bloatable6__ joined, greppable6__ joined 20:24 tonyo1 joined 20:25 notable6 left, evalable6 left, unicodable6 left, bloatable6 left, greppable6 left, tonyo left, lizmat left
MasterDuke any reason not to merge github.com/rakudo/rakudo/pull/5635 ? 20:32
timo nobody gave it a review or approval ;) 20:35
Geth rakudo/main: 81d43af66e | (Timo Paulssen)++ | tools/build/raku-ast-compiler.nqp
drastically speed up raku-ast-compiler.nqp

Terminating the LTM immediately after opening paren, bracket, or brace prevents the NFA from running all the way to the end of the file whenever it encounters an nqp-code token. This makes nqp-m finish the raku-ast-compiler in 4 seconds instead of 1600, and moar is a bit faster, too (but already plenty fast)
20:36
rakudo/main: 67be927131 | MasterDuke17++ (committed using GitHub Web editor) | tools/build/raku-ast-compiler.nqp
Merge pull request #5635 from rakudo/raku-ast-compiler-shorter-LTM

drastically speed up raku-ast-compiler.nqp
MasterDuke so much faster. rakudo jvm build was 1250s before, now it's 385s 20:44
timo++ 20:45
ab5tract timo: FWIW, all of Coke++ blin efforts are at least in part inspired by trying to test that change… 20:47
timo oops? 20:51
this change in particular should not make a difference to any module at all
[Coke] In general, getting multiple blin runs during the month pre-release will be better all around. 20:52
whether or not that particular commit is blinteresting.
at this rate I'll be lucky if this run finishes in September. 20:53
⏳ 247 out of 2204 modules processed
(same run as the 83 above!) 20:54
timo you don't happen to have the ability to `perf record -a -g -F 501 -- sleep 120` or something? 20:56
ab5tract [Coke]: just to double check.. it isn’t installing the modules against all revisions, is it? 20:59
[Coke] just 2 revisions, as far as I can tell 21:04
only more if there's a difference in output so it can bisect.
so a very small fraction of the ecosystem might have more than 2 outputs. 21:05
s/outputs/versions/
timo: I apparently can. what's that going to tell me?
I've made it up to Event::Emitter, which I think is a record for me. :) 21:07
ab5tract [Coke]: hmm.. still at least double the work that is really needed though
[Coke] timo: [ perf record: Woken up 31 times to write data ]
ab5tract: ... no, you need the baseline. 21:08
knowing that something fails isn't enough; it could have been failing already.
ab5tract Why? We know the last blin ran fine
Since we fix things until it does, every release
[Coke] even with the improve compilation check - it might have not compiled beforehand.
timo [Coke]: it'd record CPU and stack samples from everything on the system at just above 500 Hz and let us see what code is hot 21:09
[Coke] if I was running on the exact same machine that jdv ran it on, sure.
timo `perf report -g --stdio` will output the data suitable for pasting as a text file, `perf report` on its own gives you a little UI to browse results with
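Putting those commands together, one plausible capture-and-share sequence (the output filename is arbitrary):
    # system-wide, with call graphs, ~500 samples/sec, for two minutes
    sudo perf record -a -g -F 501 -- sleep 120
    # browse the resulting perf.data interactively
    sudo perf report -g
    # or dump it as text, suitable for pasting into a gist
    sudo perf report -g --stdio > perf-report.txt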
[Coke] timo: ah, so won't help blin directly, but will have some neat archaeology for core devs. :)
[ perf record: Captured and wrote 7.776 MB perf.data (59885 samples) ]
ab5tract We take it as a given that the one blin run is entirely enough for a release
[Coke] so it only ran for that 120s? 21:10
that one blin run still needs the baseline of last time. I'm not sure I understand your point, except perhaps: "Coke is wasting his time"
ab5tract So yeah, if there’s a new failure, you could regen the release to check if it’s a system specific issue
[Coke] which I'm sure is not it, but kind of feels like maybe. ;) 21:11
ab5tract No, I’m not saying that at all. You have mentioned the run time
[Coke] and I'm pretty sure the next time I run this using the same old commit, it'll go 2x as fast. 21:12
ab5tract You aren't wasting your time any more than I did when I wrote the patch to run against a single revision :(
[Coke] So until there's a way to share results across blinstances, this is fine, no worries. 21:13
ab5tract *:)
[Coke] let's get your patches merged, btw!
timo: how can I view that output? 21:14
ab5tract Yeah I have to untie them a bit but will take care of that
[Coke] there's a perf.data file in my curdir, googlign...
timo those are the `perf report` commands i mentioned 21:15
[Coke] ok. it's showing me assembly. Want me to throw this file somewhere you can get it? 21:16
timo: gist.github.com/coke/f77bc84934853...c258b9099d 21:17
timo right, when you hit enter on a symbol, the default thing it'll do is disassemble to put sample counts next to instructions
without MVM_JIT_PERF_MAP=1 in the environment we also won't be able to resolve anything inside jitted code, which is hopefully where the majority of execution time happens
[Coke] does that PERF_MAP env var slow things down? 21:18
If not (much) I can include that on my next run
timo it would only do a tiny amount of slowing down, but it'll create many files in /tmp - depending on where your pids wrap, several hundreds of thousands :D 21:19
all of them relatively small though
21:19 MasterDuke left
[Coke] not ideal given I'm paying for disk space, but manageable. 21:20
timo well, /tmp could be a tmpfs :P 21:26
ab5tract I was about to mention that
timo before starting a "perf record" you can delete any files in /tmp that don't have a matching PID still alive
the files all have a PID in their filename
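A sketch of that cleanup, assuming the usual /tmp/perf-<pid>.map naming of the perf jit-map convention:
    # remove map files whose PID is no longer alive
    # (kill -0 only checks existence reliably for your own processes)
    for f in /tmp/perf-*.map; do
        [ -e "$f" ] || continue
        pid=${f#/tmp/perf-}; pid=${pid%.map}
        kill -0 "$pid" 2>/dev/null || rm -f -- "$f"
    done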
is there an easy way to tell perl5 on the commandline or with an env var to spam at me everything it does, or everything it execs or so? 21:27
ab5tract You'll want to ensure that /tmp is actually backed by SSD if you want to pay for it as storage and not RAM 21:28
timo `env PERLDB_OPTS="NonStop=1 AutoTrace=1 frame=2" perl -dS` sounds good from man perlrun
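Concretely, with script.pl as a placeholder, and with strace as a lower-level alternative for the "everything it execs" half of the question:
    # perl's debugger in non-interactive trace mode (from perldebug)
    PERLDB_OPTS="NonStop=1 AutoTrace=1 frame=2" perl -dS script.pl
    # or watch exec calls (including children) at the syscall level
    strace -f -e trace=execve perl script.pl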
ab5tract but I guess 21:29
You might already be using Azure’s version of S3 to some capacity, since you anticipate reusing revision results
[Coke]: I want to be clear that I really appreciate the work you are putting into “blin-as-a-button” 21:33
21:33 sena_kun left
lizmat_ [Coke] ab5tract I wonder whether a special Blin mode would first test only the modules that were marked as "passing" in the past 21:49
21:49 lizmat_ left, lizmat joined
lizmat to spot any new breakage first 21:50
Geth nqp/main: 7d382436d3 | (Timo Paulssen)++ | 3rdparty/nqp-configure
Bring nqp-configure up to date (same as rakudo)
22:06
timo in my current docker experiments i'm seeing some strangeness where apparently "make install" run as root will try to repeatedly re-initialize the nqp-configure submodule, because "git config nqp.initialized" exits with failure when the git repo is owned by a different user 22:16
[Coke] (tmpfs) I am also paying for memory. :) 22:17
lizmat: given how long it takes, any fail fast would be an improvement for sure. 22:18
($$) I would probably be better off getting the next level up with more cores and then shutting it down when not in use.
timo .o( we could collect coverage information from blin runs to identify when changes to some line in core, raku, or nqp have an impact on what exact dists ) 22:19
[Coke] (doing it as a button is going to be challenging if I want to be able to spin down the VM when not in use)
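If it helps, the spin-down/spin-up cycle itself is scriptable with the Azure CLI; the resource group and VM names are placeholders, and a deallocated VM stops accruing compute charges (disks still bill):
    # stop paying for compute between runs
    az vm deallocate --resource-group blin-rg --name blin-runner
    # bring it back for the next run
    az vm start --resource-group blin-rg --name blin-runner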
timo do people not "sudo make install", or did nobody find "fatal: not in a git repository" remarkable, or did nobody besides me get this error? 22:21
alternatively, does a newer version of git make it not-a-problem?
[Coke] timo: for my raku usage, I almost always build and install as me. 22:22
timo so do i 22:23
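If what's tripping it is the ownership check git grew in 2.35.2 (it refuses to operate on repos owned by a different user), the usual workaround is a safe.directory entry; the path here is a placeholder:
    # trust a specific checkout owned by another user
    git config --global --add safe.directory /build/rakudo
    # or, inside a disposable container only, trust everything
    git config --global --add safe.directory '*'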
[Coke]: so, 501 isn't a very high frequency to collect samples at; when i work on $stuff i normally have it cranked up to like 15_000, but you've got a very long-running process there. i'd like to get a recording over a much longer period, so put a higher "sleep" number 22:32
it's probably safe to say the file size roughly increases linearly with the sleep time increase?
actually, perf record takes a -z parameter for compression, and if you crank it up to like -z 14 or so it's probably going to be noticeably smaller. with only ~500 events per second there's also no worry that it'd slow down recording too much or so 22:38
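A sketch of a longer, compressed capture, assuming perf was built with zstd support (the long option is spelled out to avoid ambiguity around the optional -z argument):
    # 20-minute system-wide capture at ~500 Hz with compressed records
    sudo perf record -a -g -F 501 --compression-level=14 -- sleep 1200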
also, perf has wonderful flags --no-no-buildid and --no-no-buildid-cache :D