06:01 camelia joined 06:15 melezhik joined
melezhik . 06:15
08:22 melezhik left
ab5tract these final tickets for v6.e are a real doozy 09:10
m: use v6.*; subset E of Int; my E $e = 11; dd $e.VAR.VAR.default 09:11
camelia Any
ab5tract m: use v6.*; subset E of Int; my E $e = 11; dd $e.VAR..default
camelia ===SORRY!=== Error while compiling <tmp>
Undeclared routine:
default used at line 1
ab5tract m: use v6.*; subset E of Int; my E $e = 11; dd $e.VAR.default 09:12
camelia Int
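For reference, the same probe without the v6.e pragma can be compared side by side under the default language revision (a minimal sketch; no output is claimed, since the behavior difference is exactly what is under investigation):

```raku
# Same introspection without `use v6.*`, i.e. under the default
# language revision; compare its output with the runs above.
subset E of Int;
my E $e = 11;
dd $e.VAR.default;
```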
ab5tract this one works if I disable setting the language revision in SubsetHOW.new_type
but the only language revision specific code in SubsetHOW itself is clearly unrelated 09:15
I removed it anyway and it doesn't change the result
I guess the language revision "has" to be in one of SubsetHOW's parent classes 09:18
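One way to check where the revision actually lands is metamodel introspection (a sketch; it assumes the `.^language-revision` method is available on this Rakudo build):

```raku
# Probe which language revision each type carries (assumes Rakudo
# exposes `language-revision` through the metamodel).
use v6.*;
subset E of Int;
say E.^language-revision;    # revision recorded on the subset
say Int.^language-revision;  # revision of the refinee, for comparison
```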
09:23 melezhik joined
ab5tract Blech 🤮 10:56
melezhik: My agent should be up and running. I called it `agent-code-named-ab5tract` 13:11
melezhik Hey, let me check 13:12
Did you run the agent job? 13:13
13:15 melezhik_ joined
melezhik_ . 13:16
so far I only see pings from m2 agents - brw.sparrowhub.io/builds 13:17
agent-HG2pgG1Nu6 and agent-jd8viIozQK
^^ ab5tract: 13:18
ab5tract hmm
well, I'm technically using podman instead of docker, but the commands ran flawlessly 13:19
melezhik_ yep, then you need to go to 127.0.0.1:4000 and run the agent job to add the agent to the pool 13:20
just running the docker/podman container is not enough
github.com/melezhik/brownie/tree/m...nt-to-pool 13:21
pasteboard.co/zUtjKBCvQcBG.jpg 13:23
ab5tract melezhik_: I did run the agent job 13:24
melezhik_ ok, what do you see on 127.0.0.1:4000/builds ? 13:25
ab5tract looks like I'm agent-jd8viIozQK
not sure why it didn't pick up the environment variable. maybe I made a typo 13:26
melezhik_ yep, I see it in the list, cool
13:26 melezhik_ left
ab5tract oh, duh. it's an environment variable in my environment, not in the container 13:27
13:28 melezhik_ joined
melezhik_ yeah, you can see the progress on brw.sparrowhub.io/builds 13:28
also, your agent's web UI will show running agent jobs on the recent builds page 13:29
right now modules are being installed on 3 agents
2 agents are on the same machine though, so practically it's a 2-agent setup: yours and my 2 13:30
you can visit the running orchestrator job at brw.sparrowhub.io/report/brw-orch/7 to see things (from all agents) in realtime 13:31
there are 100 Raku modules in the `zef install` test - github.com/melezhik/brownie/blob/m...onfig.raku 13:32
just a random list so far
we have `[54] tests out of [100] left` after just 20 minutes of work, which is not bad 13:33
how many jobs do you see in your agent queue right now, if you go to the queue page in your agent UI? 13:34
my agents' queues are agent.sparrowhub.io/queue and agent2.sparrowhub.io/queue 13:36
"duh. it's an environment variable in my environment, not in the container", yep, you need to docker run .... -e -e BRW_AGENT_NAME_PREFIX=bla-bla-bla 13:39
I wonder how many resources your box has? can I see the uptime info (shown on the agent UI main page) ... 13:41
[13] tests out of [100] left; it slows down towards the end, but I think I know how I can optimize it in the next runs ... 13:52
40 minutes elapsed so far 13:53
13:56 melezhik_ left 14:01 melezhik_ joined 14:06 melezhik_ left
melezhik Ok, it's finished now: brw.sparrowhub.io/file_view/brw-orc...ummary.txt 14:15
It took 51 minutes
Faster than it was. But like I said, there is room for improvement and I have some ideas for how to reduce the time. I think the current algorithm is a bit suboptimal, as it sends "duplicate" jobs and agents waste time installing the same module more than once; it's easily fixable though … 14:17
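A minimal sketch of the kind of deduplication that would avoid the repeated installs (hypothetical names, not brownie's actual code):

```raku
# Hypothetical guard: dispatch each module at most once, so no two
# agents waste time installing the same module.
my %dispatched is SetHash;
sub dispatch-once(Str $module, &send-job) {
    return if %dispatched{$module};   # already queued for some agent
    %dispatched{$module} = True;
    send-job($module);
}
```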
ab5tract: once I have fixed the algorithm I may ask you to stop/start the agent one more time. Thanks again! I see that even adding one agent significantly reduces the overall time, even with the suboptimal naive algorithm, so we are good 👍 14:23
ab5tract 🥳 14:42
the box is a ryzen 5950, IIRC 14:43
16:02 melezhik_ joined 16:05 melezhik_ left 16:19 melezhik_ joined 16:28 melezhik_ left, melezhik_ joined
melezhik_ ab5tract: could you please stop/start your container and run the agent job? 16:32
please make sure you drop the container files by fully stopping the container (the restart command will keep the files, AFAIK, for docker)
16:33 melezhik left
melezhik_ thanks 16:33
once you start the container I will run a new round of tests 16:34
16:44 melezhik joined 16:46 melezhik_ left 17:52 melezhik_ joined, melezhik_ left 18:47 finanalyst joined 19:33 melezhik left 19:55 Geth left 20:45 librasteve_ joined 20:58 finanalyst left
ab5tract melezhik: I kicked the container. 21:31
tellable6 ab5tract, I'll pass your message to melezhik_
ab5tract melezhik: also wanted to suggest that you declare `config` as a term. that way you can use the associative access directly, no need for parentheses 21:39
tellable6 ab5tract, I'll pass your message to melezhik_
timo you're free to kick many kinds of containers, but please refrain from kicking the bucket
kick the mug maybe, there's precedent for that in this community 21:40
ab5tract yeah, I'll stick to mugs for now lol
m: sub term:<config> { %( <path interval> X=> 1 ) }; dd config<path> 21:41
camelia 1
ab5tract m: sub term:<config> { %( <path interval> X=> 1 ) }; dd config<wow> = 3 21:42
camelia 3
ab5tract m: sub term:<config> { %( <path interval> X=> 1 ) }; dd config<wow> = 3; dd config<wow>
camelia 3
element of %{'wow'} = Any
ab5tract ok phew 21:43
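To spell out the `term` suggestion: an ordinary listop sub needs explicit parentheses before an associative subscript, whereas a term parses as a value and takes the subscript directly (a minimal sketch; `plain-config` is a hypothetical name):

```raku
# Plain sub: parens are needed to call it before subscripting the result.
sub plain-config { %( <path interval> X=> 1 ) }
say plain-config()<path>;   # 1

# Term: parses as a value, so `config<path>` subscripts directly.
sub term:<config> { %( <path interval> X=> 1 ) }
say config<path>;           # 1
```

And the "phew" above: since the term builds a fresh hash on every call, `config<wow> = 3` mutates a throwaway copy, which is why the follow-up `dd config<wow>` reports the element as Any.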
23:11 Geth joined