00:42 melezhik_ joined 00:44 melezhik_ left 02:45 melezhik_ joined 02:47 melezhik_ left 04:34 kjp joined 04:36 kjp left, kjp joined 06:42 melezhik joined 08:21 librasteve_ joined
ab5tract melezhik: this morning I'm imagining using local sparky to manage and record make spectest results 09:31
09:50 melezhik_ joined
melezhik_ . 09:50
ab5tract: cool! let me know if you need anything ... 09:51
09:55 melezhik_ left
melezhik ab5tract: meanwhile I have managed to fix some minor optimization issues in brownie, so when you are ready to spin up your agent - please let me know, you don't need to update anything, just restart your agent(s) 09:56
11:14 ds7832 joined 12:12 melezhik left 12:41 melezhik_ joined
melezhik_ [Coke]: [ab5tract]: I would like to run a brownie demo on 200 Raku distributions. [Coke]: what do you think? Would you like to join and share some agents? 12:42
ab5tract yeah one sec
12:46 melezhik joined
melezhik Sure 12:46
12:47 melezhik_ left
ab5tract alright, 3 hosts with reasonable resources 12:47
s/hosts/agent containers/ 12:48
melezhik Cool. Please run the agents, and I'll run the orchestrator 12:49
Oh, I see you're already running them. Cool 12:50
ab5tract just added a fourth for good measure :) 12:52
melezhik Excellent, this is gonna crush it 12:53
12:53 melezhik2 joined
melezhik2 brw.sparrowhub.io/report/brw-orch/46 12:53
realtime link
ab5tract will the orchestrator automatically pick up this fourth agent? 12:55
I'm still not sure why my agent name prefix isn't working
melezhik2 good question. The current algorithm is a bit too hasty: it spreads all 100 distributions, 10 distros per pack, to all available agents within a few minutes, so if some agents show up late they won't get any jobs 12:56
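A minimal Raku sketch of the eager spread described above; the @distros/@agents names and the round-robin assignment are assumptions for illustration, not brownie's actual code:

    my @distros = (1..100).map({ "Dist$_" });
    my @agents  = <agent-1 agent-2 agent-3>;          # agents known at start time
    for @distros.batch(10).kv -> $i, @pack {          # 10 distros per pack
        my $agent = @agents[$i % @agents.elems];      # round-robin over agents known *now*
        say "$agent <- @pack[]";
    }
    # an agent that registers after this loop has run receives no jobs at all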
ab5tract i saw in the logs 3 agents, so yeah I guess #4 is idle 12:57
it would be really cool to see the queue of module installs as they come through
12:58 melezhik83 joined, melezhik83 left, melezhik_ joined
ab5tract the agent would have to publish that back to sparky, I guess 12:58
melezhik_ yeah, I think I will improve the frontend UI in the future; right now the page I pointed to has real-time info, e.g. "10 tests out of 100 finished" 13:00
and it's refreshed via a web socket
ab5tract www.youtube.com/watch?v=w0HXckC0OCU
13:00 melezhik2 left
ab5tract SSE might be a perfect fit for this 13:01
but if there is already a websocket set up, maybe it's not useful
melezhik Yeah. I just need to make it more user friendly 13:02
ab5tract still,
melezhik If you look at brw.sparrowhub.io/builds
ab5tract I think SSE maps really well to supplies
melezhik Every job.result build is a result reported by some agent on finished tests 13:03
13:04 melezhik_ left
melezhik I am not against SSE; sparky was designed to be a general-purpose framework, not tied to a specific use case, so we could for example build some nice SSE-based frontend on top of it, as we have all the realtime data anyway 13:05
ab5tract sounds good to me 13:06
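A tiny Raku sketch of why SSE maps so naturally onto supplies: each emitted progress event becomes one data: frame on the wire. The $progress supply and its payloads are made up for illustration:

    my $progress = Supply.interval(1).map: { "{ ($_ + 1) * 33 } tests out of 100 finished" };
    my $sse      = $progress.map: { "data: $_\n\n" };   # one SSE frame per event
    react whenever $sse -> $frame {
        print $frame;
        done if ++$ >= 3;    # stop the demo after three frames
    }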
13:06 melezhik2 joined
melezhik2 it's already 66 tests out of 100 finished 13:07
after 15 minutes of work
not bad )
and we're only using 3 agents 13:08
also if you click on any job.report build, like this one - brw.sparrowhub.io/report/agent.report/146 13:09
ab5tract I'd like to run it again, because I count 5 agents in the UI 13:10
13:10 melezhik_ joined
ab5tract I think I need a login to see the agent.report 13:10
13:10 melezhik_ is now known as melezhik_3
ab5tract whoops, spoke too soon 13:11
melezhik_3 hm, you don't need to log in to see job reports
13:12 melezhik_ joined
melezhik_ also, in the artifacts of job.report pages, if there is any Foo.log, like WWW::CloudHosting::Hetzner.log, that means some error occurred during zef install of that module 13:12
and you can get those details by viewing WWW::CloudHosting::Hetzner.log
via the view link, actually
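A hypothetical Raku sketch of reading such a report locally: any Foo.log artifact means zef install of Foo failed, so the log basenames are exactly the failing module names. The artifacts/ path is an assumption:

    my @failed = 'artifacts'.IO.dir(test => *.ends-with('.log'))
                                .map(*.basename.subst(/ '.log' $ /, ''));
    say @failed ?? "failed installs: @failed[]" !! 'all modules installed cleanly';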
88 tests out of 100 finished 13:13
13:13 melezhik2 left
ab5tract my current resource usage is quite light 13:15
13:16 melezhik_3 left 13:17 melezhik_ left
melezhik It’s finished 13:18
13:18 melezhik_ joined, melezhik_ is now known as melezhik___1
melezhik___1 brw.sparrowhub.io/report/brw-orch/46 13:18
if you click on the system tab 13:19
you see the time
melezhik Started at 12:52 and finished at 13:17 13:20
13:20 melezhik_ joined
melezhik So 25 minutes for 100 Raku distros 13:20
Not bad at all 13:21
I wonder what we'd get if we run with 5 agents? 4 of yours and one of mine?
Can we repeat?
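Back-of-the-envelope Raku for the numbers above, assuming perfect scaling (which real agents won't quite reach):

    my $wall-seconds = 25 * 60;                  # 100 distros on 3 agents
    my $per-distro   = $wall-seconds / 100;      # 15 s wall-clock per distro
    my $agent-time   = $per-distro * 3;          # ~45 s of agent time per distro
    say "5 agents: ~{ (100 * $agent-time / 5 / 60).round } minutes";   # ~15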
brw.sparrowhub.io/file_view/brw-orc...ummary.txt 13:22
13:23 melezhik_ left, melezhik___1 left
melezhik I rebuilt the orchestrator so the links should be gone, but that's fine as I managed to get the timing )) 13:30
I will be ready in 30 minutes, please hold off on running the agents, I will let you know 13:31
13:43 ds7832 left
Geth rakudo/main: 7117060347 | (Elizabeth Mattijsen)++ | src/Raku/Grammar.nqp
RakuAST: fix =table checks for numbered tables

The problem was that the $*IN_TABLE check was done on the whole type, instead of the root of the type. So =begin table3 would check for "table" and see "table3" and not increment $*IN_TABLE.
Now the root of doc-identifier is also exposed, and used to check whether or not we're in a table.
Fixes #5994
14:00
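A small Raku illustration of the root check the commit describes; the helper name here is hypothetical (the real fix lives in Grammar.nqp):

    sub doc-identifier-root(Str $type --> Str) {
        $type ~~ /^ (<.alpha>+) / ?? ~$0 !! $type
    }
    say doc-identifier-root('table3') eq 'table';   # True: still counts as a table
    say 'table3' eq 'table';                        # False: the old whole-type check missed it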
14:08 melezhik_ joined
melezhik_ ab5tract: I am ready 14:09
ab5tract do I need to re-run the agents?
melezhik_ yeah, just stop/start to clean the cache
melezhik And if you use -e BRW_AGENT_NAME=name 14:11
This should set the name for the agent
docker run --rm -it --name agent -p 4000:4000 -e BRW_AGENT_NAME_PREFIX=cool-boy agent 14:12
Or pretty much the same I guess for podman
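For illustration, a Raku sketch of how an agent could derive its name from that environment variable; the suffix scheme here is assumed, not necessarily what brownie does:

    my $prefix = %*ENV<BRW_AGENT_NAME_PREFIX> // 'agent';
    my $name   = $prefix ~ '-' ~ (^0x10000).pick.base(16).lc;
    say $name;    # e.g. cool-boy-3f2a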
Ok, I see the first one coming 14:13
14:14 melezhik_ left
ab5tract should be 4 online now 14:14
melezhik Yep. Checking
I also started another agent 14:16
Let’s wait till all of them are registered in the system
And then I will run orchestrator
brw.sparrowhub.io/report/brw-orch/35 14:19
Bummer, the orchestrator refuses to see more than 3 agents; should be a bug, need to look into it closely 14:31
Ok, finished now. It's exactly 30 minutes on 3 agents. I am going to patch the orchestrator to address the limited agent number issue 14:51
And if/when you are ready we can run again
I am ready. ^^ ab5tract 15:05
[Coke] raku.land just spinning 17:35
why do we have raku.land/zef:librasteve/Physics::Unit and raku.land/zef:p6steve/Physics::Unit ? 17:36
I assume these are both the same steve.
and basically the same module, but with newer ones under librasteve 17:39
Old ones are a candidate to move to REA only? 17:40
(REAdonly)
lizmat the problem is that fez doesn't allow deletions of packages after the initial 24h timeframe 17:41
and raku.land just presents what it gets from 360.zef.pm 17:42
[Coke] Sounds like a use case for fez that needs to be considered. 17:49
What if we find a module that's 3 days old and has malware in it?
lizmat we make sure a higher version is uploaded without it 17:50
18:02 finanalyst left 18:18 melezhik_ joined
melezhik_ my 2 cents here - if malware crept in, maybe it's still worth deleting the module permanently to avoid accidental installation (though that's not likely if a newer version exists)? 18:21
[Coke] Yes - we shouldn't leave a known bad version out there. 18:27
if it's a simple vulnerability or bug, that can be fixed with a new version, sure. 18:28
18:30 melezhik_ left 18:33 kjp_ joined 18:37 kjp left, rakkable left, camelia left 19:13 rakkable joined 19:28 camelia joined
librasteve_ rakudoweekly.blog/2025/10/27/2025-...-or-treat/ 19:39
notable6: weekly reset 19:41
notable6 librasteve_, Moved existing notes to “weekly_2025-10-27T19:41:15Z”
21:22 melezhik left 21:50 librasteve_ left 21:53 librasteve_ joined 23:48 guifa joined