02:02 guifa joined 04:03 guifa left
ab5tract [Coke]: I feel like adding an sqlite database that gets a row per tested module would be helpful. Or it could add only failures. Either way, restarting a blin job would be trivial if this were in place, right? 07:21
just spitballing ideas for the next version
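[editor's note: a minimal sketch of the sqlite idea above, in Python. The table name, columns, and file path are all hypothetical, not blin's actual schema; the point is just that a row per tested module makes restarting a run trivial.]

```python
import sqlite3

def open_results_db(path="blin-results.sqlite"):
    # One row per tested module; schema and names are hypothetical.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS results (
        module     TEXT PRIMARY KEY,
        status     TEXT NOT NULL,      -- e.g. 'ok' or 'fail'
        rakudo_sha TEXT NOT NULL       -- which rakudo commit was tested
    )""")
    return db

def record(db, module, status, sha):
    # Upsert so a re-test simply overwrites the old row.
    db.execute("INSERT OR REPLACE INTO results VALUES (?, ?, ?)",
               (module, status, sha))
    db.commit()

def remaining(db, all_modules):
    # On restart, skip everything that already has a row.
    done = {row[0] for row in db.execute("SELECT module FROM results")}
    return [m for m in all_modules if m not in done]
```

Storing only failures, as also suggested, would just invert the `remaining` filter.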
melezhik: these changes to whateverable are probably necessary too github.com/Raku/whateverable/compa...rable:main 07:24
tellable6 ab5tract, I'll pass your message to melezhik
07:45 melezhik joined
melezhik [Coke]: I am going to make a proposal today on how I see blin evolving further. In two words: I would replace blin with sparrow and run tests in a distributed cluster; this would give us simplicity and scalability. I think I am ready to share a concept; I may create a ticket in the blin2 repository for discussion. 07:50
tellable6 2025-10-22T07:24:06Z #raku-dev <ab5tract> melezhik: these changes to whateverable are probably necessary too github.com/Raku/whateverable/compa...rable:main
07:50 lizmat left
melezhik ugexe: I get that; the uniformity of the tested environment is achievable if we deliver test boxes as docker containers. I will reflect it in my proposal. 07:51
lizmat: your eco-system-cache could also be useful if we pack it inside the docker containers which would actually run the tests 07:53
tellable6 melezhik, I'll pass your message to lizmat
melezhik ab5tract: thanks for that. Again, to my taste blin is great when run in single-host mode but a bit over-engineered for use in a distributed environment. I would like to have simplicity and scalability; I will reflect it in my proposal 07:56
08:26 melezhik_ joined
melezhik_ [Coke]: github.com/coke/blin2/issues/1 , I wonder if we can give access to everyone who is interested, so we can discuss, or I can just put it into a sort of public GitHub gist 08:27
anyways - gist.github.com/melezhik/1fee7ae37...740aab3fd5 cc [Coke]: ugexe: lizmat: ab5tract: 08:38
solubility -> scalability )) 08:42
08:45 melezhik_ left 11:05 melezhik left 11:06 patrickb left 11:34 melezhik joined
melezhik . 11:36
11:37 librasteve_ left
[Tux] Rakudo v2025.10-22-ge1b3f9277 (v6.d) on MoarVM 2025.10-8-g36fcd3d1f
csv-ip5xs          0.264 - 0.267
csv-ip5xs-20       1.096 - 1.143
csv-parser         1.145 - 1.157
csv-test-xs-20     0.114 - 0.115
test               1.872 - 1.913
test-t             0.455 - 0.464
test-t --race      0.279 - 0.285
test-t-20          5.820 - 5.985
test-t-20 --race   1.412 - 1.429
11:42
csv-test-xs 0.014 - 0.014
tux.nl/Talks/CSV6/speed4-20.html / tux.nl/Talks/CSV6/speed4.html tux.nl/Talks/CSV6/speed.log
11:48 lizmat joined 11:56 lizmat left 12:19 patrickb joined 12:27 lizmat joined 12:28 lizmat left 12:29 lizmat_ joined, lizmat_ left 12:45 rba left 13:00 rba joined 13:15 lizmat joined 13:16 lizmat_ joined, lizmat left 13:19 lizmat_ left 13:32 librasteve_ joined 13:55 melezhik left
[Coke] Anyone who wants access, let me know. 14:10
14:12 melezhik_ joined 14:48 melezhik_ left, melezhik_ joined 15:02 finanalyst joined 15:41 melezhik_ left 15:53 finanalyst left 15:56 melezhik_ joined
ugexe I think if you wanted to do a more cpan testers type thing, we'd be able to group the results in some way, potentially by environment / container / author etc. That could allow one to view only the results of blin runs using a specific container with a known environment (a blin view), but also the results of installs on arbitrary environments (a cpan testers matrix-like view) 15:57
for example, it can be useful to see the test failures of some relatively unused os/arch, but those failures aren't important in the context of what blin wishes to achieve (and indeed, since it is an unknown environment, it could very well be user error) 15:59
16:01 melezhik_ left 16:02 melezhik_ joined
[Coke] And right now we're only testing one arch. Even adding a common second one like windows would be amazing. 16:07
16:18 timo2 is now known as timo 16:24 melezhik_ left, melezhik_ joined 16:27 melezhik_ left 16:28 melezhik_ joined
melezhik_ . 16:29
ok, looks like I chose the cpan testers framing and it became misleading, sorry for that. What I really meant, the main idea I am proposing, is: instead of running all tests on a single host using blin, I would rather see tests run on multiple hosts in a distributed way, as cpan testers do, but not because we want to test on many envs and collect stats. No, the agent docker image will be the same 16:31
across all agents; this will ensure uniformity 16:32
but yes, generally speaking, we could eventually have many groups of agents for different archs, if we really need it
ugexe that seems more along the lines of users providing agents to the pool 16:33
melezhik_ however the distributed nature of the tests is the key, as it scales and is managed much better
ugexe: exactly
like azure pipeline agents, or gitlab workers if you are familiar with that
the same PULL pattern 16:34
with the job orchestrator being a central node, but its CPU/memory pressure is minimal as it does not actually run tests, only orchestrates them across the cluster, gets results, and creates all sorts of reports 16:35
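[editor's note: a toy in-process sketch of the PULL pattern described above, assuming nothing about sparrow's actual implementation. Agents ask a shared queue for work instead of the orchestrator pushing it; a real agent would run zef install and the module's tests where the comment indicates.]

```python
import queue
import threading

jobs = queue.Queue()      # orchestrator's work queue
results = {}              # orchestrator's collected results
lock = threading.Lock()

def agent(name):
    # Each agent pulls jobs until the queue is drained.
    while True:
        try:
            module = jobs.get_nowait()
        except queue.Empty:
            return
        # A real agent would run `zef install` and the tests here.
        with lock:
            results[module] = f"tested on {name}"
        jobs.task_done()

for m in ["JSON::Fast", "Cro::HTTP", "Red"]:
    jobs.put(m)

threads = [threading.Thread(target=agent, args=(f"agent-{i}",))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This is the same shape as azure pipeline agents or gitlab runners: the central node never needs to know where the workers are.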
16:37 melezhik_ left, melezhik_ joined
melezhik_ also, say we test against a pre-release rakudo version, a SHA from whateverable, and we have 500 modules or even the whole ecosystem. From the start, agents have 0 modules installed, but over time, the more zef install Foo runs happen on them, the bigger "cache" they have 16:37
this will eventually make the zef install Foo time for other modules smaller and smaller 16:38
I call it eventual performance increase 16:39
we don't even need to think about DAG depth-first traversal, installing modules without dependencies first, as blin does
the strategy becomes much simpler
say we have 10 agents and 500 modules or so; we can just install 5 modules in one shot on every agent, and I guess within about 30 minutes every agent may end up with a decent "cache" of installed modules 16:42
and if we are lucky enough when randomly shuffling the list of modules, and the dependency tree for all 500 modules is well balanced, we have a decent chance of a good total testing time for all 500 modules 16:44
this is of course only speculation, but I would like to check it within a few days by creating a simple demo with 2-3 agents and 1 orchestrator and testing it against 50-100 modules 16:45
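[editor's note: the shuffle-and-deal strategy above can be sketched in a few lines. All names are hypothetical; the point is that no dependency analysis happens, modules are just shuffled and dealt round-robin in fixed-size batches.]

```python
import random

def shuffled_batches(modules, agents, batch_size=5):
    # Shuffle the module list, then deal fixed-size batches
    # round-robin to agents; no DAG analysis, per the proposal.
    mods = list(modules)
    random.shuffle(mods)
    plan = {a: [] for a in agents}
    batches = [mods[i:i + batch_size]
               for i in range(0, len(mods), batch_size)]
    for i, batch in enumerate(batches):
        plan[agents[i % len(agents)]].extend(batch)
    return plan
```

With 500 modules, 10 agents, and batches of 5, each agent ends up with 50 modules; whether dependency overlap makes the caches warm up fast enough is exactly the speculation the demo would test.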
another idea derived from this architecture is incremental testing: we can always stop testing (due to agents going offline or whatever reason) and then after a while resume it 16:51
as agents have state in the sense of installed modules, we won't start from scratch; also, the orchestrator will keep track of already-checked modules and won't ask agents to test them again 16:52
it's quite convenient
of course the cache needs to be built again for every brand-new rakudo SHA version, but for old versions it will be kept forever on agents (via installed modules) and on the orchestrator (via records of checked modules) 16:53
[Coke] Can probably also get rid of old caches - if your cache is for a tagged release from 2 months ago, probably done. 16:54
melezhik_ the cache could be cleared if required (all agent containers would just restart)
[Coke] if it's from a sha1 commit even a week ago, it's probably done
one nice thing about managing the cache outside of a restart is that it still works if you've already started on the latest round. 16:55
timo it would be good if we could make installed modules from the cache "not show up" when they are not actually in the (transitive) dependencies of the module we actually want to test
[Coke] (but yah, that isn't a hard requirement.)
I think blin does this by downloading the module but installing it as needed into a separate location each time, then wiping when done. 16:56
melezhik_ yep, or to make the cache persistent through container restarts, we need to use docker volumes to mount a filesystem for zef installs into the container 16:56
cached/cacheless could be configurable as well, depending on what we need, but the main idea is that containers allow doing it via volume mounts 16:58
timo we could also turn building the dependencies into a "docker build" / buildah-like thing where we start with the base container image; adding JSON::Fast is the first step, then adding the next dependency, and so on
if we sort things properly, we can then re-use these intermediate layers 16:59
but that's DAG traversal again
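[editor's note: timo's layer-per-dependency idea could be sketched as a generated Dockerfile, one RUN per dependency. The base image name and the use of zef's --/test flag are assumptions for illustration; sorting the dependencies makes the layer prefix deterministic, so images sharing a dependency prefix can re-use Docker's cached intermediate layers.]

```python
def dockerfile_for(deps, base="rakudo-star:latest"):
    # One `zef install` per RUN line so Docker can cache and re-use
    # the layer for every image that shares the same dependency prefix.
    # Sorting makes that prefix deterministic across modules.
    lines = [f"FROM {base}"]
    for dep in sorted(deps):
        lines.append(f"RUN zef install --/test {dep}")
    return "\n".join(lines) + "\n"
```

The caveat in the chat stands: choosing an ordering that maximizes shared prefixes across modules is DAG traversal again.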
melezhik_ timo: in my view, the agent docker image is only for agent usage; it won't have anything related to the modules being tested 17:00
and we test modules under a completely different rakudo coming from the whateverable archive 17:01
17:07 melezhik_ left, melezhik_ joined 17:13 melezhik_ left 17:14 melezhik_ joined 17:21 melezhik_ left, melezhik_ joined 17:24 melezhik_ left 17:39 melezhik_ joined 17:48 melezhik_ left 17:51 melezhik_ joined 17:59 melezhik_ left, melezhik_ joined 18:15 melezhik_ left 18:16 melezhik_ joined 18:21 melezhik_ left, melezhik_ joined 18:26 melezhik_ left 18:27 melezhik_ joined 18:32 melezhik_ left, melezhik_ joined 18:38 melezhik_ left, melezhik_ joined 18:45 melezhik_ left, melezhik_ joined 18:52 melezhik_ left, melezhik_ joined 18:57 melezhik_ left, melezhik_ joined 19:02 melezhik_ left
librasteve_ cwiggins: good luck … I did something like this for a clean AWS Ubuntu machine … the code is here github.com/librasteve/raku-CLI-AWS...kumod#L314 19:28
tellable6 librasteve_, I'll pass your message to cwiggins123
librasteve_ cwiggins: to get that to work I made a template perl script and then (since this is a module) copy that over on the zef module install (see the Build.rakumod for how to do that if you want to make a raku module) 19:29
tellable6 librasteve_, I'll pass your message to cwiggins123
librasteve_ cwiggins: if I did it again, I would likely look at the Sparky (or is it Sparrowdo) modules, since that has the ssh bootstrap done in a more “standardized” way 19:31
tellable6 librasteve_, I'll pass your message to cwiggins123
22:25 Voldenet left 22:27 Voldenet joined 23:41 lizmat joined 23:44 lizmat left