00:19
japhb left
00:20
ds7823 left
00:35
japhb joined
00:37
MasterDuke joined
00:57
japhb left
01:23
japhb joined
01:36
japhb left
01:41
japhb joined
01:42
MasterDuke left
01:47
japhb left
02:12
japhb joined
06:14
timo2 left
06:17
timo2 joined
08:30
melezhik joined
11:09
melezhik left
lizmat | m: sub a(--> Int:D) { }; dd a | 15:10 | |
camelia | Nil = Nil | ||
[Coke] | Still no response to the bug on META6 that has been blocking installation on Windows since February | 15:39 | |
While there is a workaround that we can do in the module itself to avoid the problematic code, the issue has something to do with Junctions. | 15:40 ||
Perhaps we can golf that down and get it fixed in the next rakudo release? :| | |||
My simple golf is apparently too simple, as it works on mac and windows. | 15:44 ||
timo2 | oh, something involving junctions that behaves differently on linux vs mac and windows? | 16:01 |
[Coke] | github.com/jonathanstowe/META6/issues/31 | 16:04 | |
if we iterate over the list, it works. if we do all(...list) ... it works on non-windows | 16:05 | ||
trying to golf it into a one-liner that still fails on windows, but no luck so far. | 16:06 ||
but SO many things depend on META6, not being able to install it on windows... | 16:07 | ||
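A minimal Raku sketch of the shape being discussed, with made-up data; the real failing code lives in META6's test suite, and this reduction is explicitly an assumption (the simple golfs above did not reproduce the bug):

```raku
my @values = <one two three>;

# Iterating explicitly: reported to work on every platform.
say so $_ ~~ Str for @values;

# Collapsing into a Junction first: ~~ then type-checks against the
# whole Junction, and per the discussion this is the form that
# misbehaves on Windows in META6's case.
say so all(@values) ~~ Str;
```

The `~~ Str` check here stands in for whatever type check META6 actually performs; it is illustrative only.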
timo2 | have we tried turning spesh on and off for this? | ||
[Coke] | I can try that now. | 16:08 | |
$env:MVM_DISABLE_SPESH=1 - test still fails | 16:10 | ||
timo2 | because ~~ is type-checky | ||
it's SPESH_DISABLE | |||
MVM_SPESH_DISABLE | |||
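For reference, the corrected variable name in both shells involved (the script name is hypothetical; this is environment setup, not a runnable test):

```raku
# POSIX shell: set it for a single run only.
#   MVM_SPESH_DISABLE=1 raku t/problem.t
#
# PowerShell on Windows (note: $env: assignments persist for the session):
#   $env:MVM_SPESH_DISABLE = '1'
#   raku t/problem.t
```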
[Coke] | ... oops! | 16:11 | |
ok, still fails | |||
timo2 | ah, it's broken on windows, not on linux, i misunderstood the earlier message i think? | 16:12 | |
otherwise i would have tried it locally | 16:13 | ||
[Coke] | correct, works everywhere but windows | ||
and it's been like this since at least Feb | |||
(but I haven't checked if that's "when I noticed", "when the test was changed" or "when the current rakudo stopped working") | 16:14 ||
timo2 | i'm not sure if you want to take the time to dive in with moar-remote or so, but an initial idea of what the code is doing can be gained by setting MVM_COVERAGE_LOG=this_will_become_huge.txt MVM_COVERAGE_CONTROL=1 and then putting `use nqp; nqp::coveragecontrol(1)` before the problematic call | ||
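Put together, timo2's recipe looks roughly like this (file and script names are hypothetical placeholders):

```raku
# Run with both knobs set, e.g.:
#   MVM_COVERAGE_LOG=this_will_become_huge.txt MVM_COVERAGE_CONTROL=1 raku t/problem.t
#
# With MVM_COVERAGE_CONTROL=1, MoarVM only logs lines executed after
# coverage is switched on from inside the program:
use nqp;
nqp::coveragecontrol(1);   # start logging just before the problematic call
# ... the failing META6 call goes here ...
```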
[Coke] | That's probably beyond my debugging powers at this point. If someone needs that file generated, I can do and share the huge.txt | 16:15 | |
timo2 | it won't become quite as huge with the nqp::coveragecontrol as opposed to setting the env var to 2 which would include the entirety of rakudo's compilation stage and such | 16:18 | |
jdv | is "squashing" assumed nowadays? i know i've been out of it for a bit but that wasn't really a normal thing back in my day:) | 16:22 |
merge means merge, to me | |||
lizmat | if it's a single commit PR (or one that could well be considered a single commit) I use squash nowadays, because it doesn't add another entry to the log | 16:23 | |
this is from the days I was going through git logs for the weekly, as a merge of a PR several weeks old was hard to trace | 16:24 ||
jdv | arguably in that case a PR may not even be useful. just rebase and push. | 16:27 |
but ok:) | |||
lizmat | well, this is mostly seen from the Github online interface | 16:28 | |
afaik you cannot just rebase and push a PR there | |||
Geth | rakudo/main: 10 commits pushed by (Will Coleda)++
review: github.com/rakudo/rakudo/compare/4...9e284f2fd4 |
16:29 | |
[Coke] | jdv: I very often squash commits, e.g. for raku/docs | 16:30 | |
I figured it was one commit, no worries, except there were worries. :) | 16:31 | ||
btw, jdv, thanks for fixing the inline-perl5 thing a while back! | 16:32 | ||
was nice to see those tests run | |||
Geth | ¦ rakudo: coke self-assigned ake release step 'rakudo-stress-errata' doing too much github.com/rakudo/rakudo/issues/5827 | 16:33 | |
jdv | oh did i? | ||
[Coke] | jdv: I'll do github.com/rakudo/rakudo/issues/5827 later this week, should be easy | ||
jdv | i vaguely remember something about that:) | ||
[Coke] | :) | ||
jdv | oh, that'd be nice | ||
[Coke] | The worst part for me right now is making sure I catch the end of the 6.c run visually. :) | 16:34 |
jdv | trying to catch the embedded failure reports on that step is kinda annoying | ||
[Coke] | (well, after fixing the other few issues this week) | ||
jdv | yup. it's the one part i have to sit there for ||
[Coke] | I also noted that somewhere, some utility is looking for "very happy to announce" as part of the message, and I DELIBERATELY REMOVE THE VERY, so hopefully I haven't broken anything doing that. | 16:35 | |
jdv | what's the word on the street with ast? | ||
EOY? | |||
[Coke] | I don't think we have a solid deadline post-meetup. | ||
ab5tract++ has been doing several updates recently | 16:36 | ||
jdv | ah, ok | ||
ab5tract | [Coke]: why thank you :) | ||
timo2 | [Coke]: i can look through the generated file if you share it | ||
ab5tract | I've still been averaging a few hours a day at hacking, but I'm stuck on the remaining issues with generics | 16:37 | |
[Coke] | jdv: once I split up that ake step into 2, I think my next big thing is to update blin. I had been thinking of a client/server approach, but japhb suggested an intermediate step of just two processes with a supply/tap to work on the jobs. I need to do this because I cannot tell you how many times since I started running blin it has just died halfway through a run and I've had to do the whole thing over. ||
timo2: OK. no promises, will try to get that out this week | |||
I will rank that above "complete blin rewrite". :) | 16:38 | ||
jdv | that's only happened to me a few times over the few dozen runs i've done. weird. | 16:47 | |
what is the cause of the failure? | |||
i thought he mentioned a simpler fs based "job queue" | 16:48 | ||
but sure | |||
sounds cool. but afaik we still don't have a recent release of the deps of blin. it is a little overly complex | 16:49 | ||
aka Whateverable - i think that's what its called | 16:50 | ||
this is why my blin docker container is "20 months ago"
[Coke] | (cause of the failure) - I have no idea, because the whole run crashes, closes the tmux session. all I have is the "overview" in progress. | ||
jdv | *image | ||
for me its typically oom | 16:51 | ||
[Coke] | we can update the docker image you're using (or, rather, create a new one. I have a few under coke/ that I've created for other raku usages...) ||
Sure - but then there's no way to say "skip this one that caused an OOM" (or which one even caused it) | |||
and we shouldn't require like 64G to run this. | 16:52 | ||
jdv | which is why i have a 100% sized swap: if i'm relatively lucky and check in every few hours, i can catch it getting buried in swapping before it goes oom ||
i have 16G of ram and swap :)
[Coke] | Yup. | ||
jdv | its dumb | ||
idk why we don't just ulimit it or whatever | 16:53 ||
haven't loked | |||
*looked | |||
[Coke] | Well, allowing it to split into jobs across processes will eventually allow us to split it up over agents, and also collect data over multiple runs so we can do smarter skips. "oh, this failed for the last 10 releases, maybe let's just not test it going forward" | 16:54 | |
jdv | i have no problem running such an old blin. a large point of my trying to update mine was to get it to the point any rando can do it. | 16:55 | |
[Coke] | "at the very least, don't re-test the known failure from last time, only test it against HEAD" | ||
jdv | but i don't have tuits and liz gave up, so far;) | ||
[Coke] | as I recall, I couldn't even get your docker image to work for me, which is why I'm running off blin head. ||
jdv | at one point maybe a couple years ago it was a clean docker build | ||
[Coke] | eh. liz has plenty of other stuff on her plate, no worries there | 16:56 | |
jdv | well, she does the moves to community modules and she was kinda maintaining it ||
but if anyone else wants to do that, that'd be cool | |||
iirc the issue is all the whateverable dists on the ecos are out of date | 16:57 | ||
i can continue to fix the docker build once that's corrected. i think. been a while. | |||
[Coke] | could do a build that pulled in from the git repo instead of zef. | 17:02 |
timo2 | could new-blin be built on top of sparky maybe? | 17:05 | |
[Coke] | What does that give us? | 17:07 | |
timo2 | well, i imagine sparky already has job management, a web frontend, i think it has distributed runners, too | 17:08 | |
(all speculation from my side) | 17:09 | ||
[Coke] | I mean, maybe. Seems like it would be better for the full client/server workflow, but probably overkill for the "make the runs resumable" part. | 17:11 | |
timo2 | it'll also reliably slurp up logs for us, and already have handling for resource exhaustion? | 17:12 | |
also, sparky is already being maintained and has been battle-tested, and it's always better to rely on Other People's Code™ when you yourself don't have that much time to invest, right? | 17:20 | ||
librasteve_ | rakudoweekly.blog/2025/10/20/2025-...6-2025-10/ | 17:42 | |
[Coke] | agreed, want to focus on solving the problem, not implementing a solution | 17:45 | |
ab5tract | I keep intending to learn sparky and would be willing to help efforts | 17:53 | |
[Coke] | cool, cool. | 18:34 | |
timo2 | is the CoreHackers namespace still good to use for new stuff? | 18:58 | |
[Coke] | so if we use sparky, looks like we can eventually "just" switch from having local runners to remote runners. | 19:18 |
it does look like sparrow is not letting clients pull jobs, but is pushing them? | 19:20 | ||
timo2 | ah, i guess it's vaguely ansible-ish in that way? | 19:38 | |
[Coke] | ab5tract: what's your github id? | 19:40 | |
ah, same. | 19:41 | ||
jdv | i prefer the push pull | 19:58 | |
Geth | rakudo/coke/errata-split: 95c56c2fa9 | (Will Coleda)++ | 2 files
Split -errata akefile target into 6c/6d
Also reset the branch to master when done.
Fixes #5827
Fixes #5829 |
22:35 | |
rakudo: coke++ created pull request #5992: Split -errata akefile target into 6c/6d |
22:36 | ||
[Coke] | jdv: there's another one for your review. | ||
splits stress-errata ake step and forces us back to master after each test. (so that when we archive, we're using the right branch) | 22:37 | ||
jdv: I'll take the next release to make sure all my changes work. | 22:40 | ||
(unless you want it!) | |||
Geth | rakudo/coke/antiflap: ffeb426021 | (Will Coleda)++ | tools/releasable/Akefile
Only check flappers 2 times.
Good enough, Fixes #5828 |
22:44 | |
rakudo: coke++ created pull request #5993: Only check flappers 2 times. |
22:45 | ||
[Coke] | releasable6: next | 22:51 | |
releasable6 | [Coke], Release date for Rakudo 2025.10 is listed in “Planned future releases”, but it was already released. | ||
[Coke], and I oop! Backtrace: gist.github.com/47dc7ac18253797c36...ecad4d6543 | |||
Geth | rakudo/main: 923b95fc33 | (Will Coleda)++ | docs/release_guide.pod note next release |
22:53 | |
[Coke] | releasable6: next | ||
releasable6 | [Coke], Next release in ≈25 days and ≈20 hours. There are no known blockers. Changelog for this release was not started yet | ||
[Coke], Details: gist.github.com/7e92c74df4e21d7c7a...918cacdc87 | |||
[Coke] | releasable6: next | 22:58 | |
releasable6 | [Coke], Next release in ≈25 days and ≈20 hours. There are no known blockers. 18 out of 18 commits logged |