timo | JIT isn't supported on riscv64-linux-thread-multi yet. | 00:52 | |
sadface | |||
ab5tract | Is it supported now on aarch64? | 05:20 | |
05:42 sena_kun joined
ab5tract | MasterDuke: thank you for remind me about pmurias++ truffle work. It seems to have gotten quite a ways before he stopped | 06:07 | |
tellable6 | ab5tract, I'll pass your message to MasterDuke | ||
ab5tract | *reminding | 06:08 | |
08:51 sena_kun left
Geth | rakudo/main: 38fa6c4821 | (Elizabeth Mattijsen)++ | 4 files
RakuAST: fix various deparsing and highlighting issues
This was mostly about making the use of whitespace more consistent, and more esthetically pleasing. So:
=begin pod
...
(12 more lines) |
10:10 | |
10:42 El_Che joined
[Coke] | ARGH. I didn't run it in tmux this time, because when it failed before, it took out the tmux session anyway. Came back to the laptop this morning to find that *iterm* had crashed on my mac. | 12:21 | |
Will bump up the VM specs again before rerunning. | |||
... huh? but somehow it automatically reconnected? | 12:22 | ||
magic. ⏳ 608 out of 2204 modules processed | |||
timo | ab5tract: we only have jit on x86_64 | 12:46 | |
ab5tract | That was my understanding but didn’t know if the move away from the expression JIT changed that fact | 12:49 | |
Regardless, I hope you can understand why I might be confused by mentions of supporting riscv64 when aarch64 is missing? :) | 12:50 | ||
timo | yeah it was a little tongue-in-cheek | 12:55 | |
actually the expression jit was supposed to be an easier way to get further architectures supported i seem to recall? | 12:56 | ||
i'd share my riscv64 and ppc64 and ppc32 images for you all to enjoy, but i'm still on a tethered connection with a monthly traffic limit :D | 13:02 | ||
[Coke] | ok, and now, later on, it got disconnected even though my laptop has been up the whole time. wtf. | 13:15 | |
lizmat | perhaps a ping -i 60 might help ? | 13:30 | |
[Coke] | resizing, will run in tmux again for next run. | 13:31 | |
I doubt the OOM will get it this time. | |||
going from 2 to 4 cores seems to be more than doubling the throughput here. | 13:40 | |
I just restarted a few minutes ago, already back up to ⏳ 97 out of 2204 modules processed | 13:41 | ||
(rerunning against the same commit ID I was testing earlier, it's still testing all the "new" endpoints) | 13:42 | ||
s/endpoints/distros/ | |||
coleman: any preference for tooling to "run a bunch of commands on a fresh linux box to get it set up properly"? (will make a shell script for now to regen the VM I am doing the blin runs on) | 13:45 | |
... er, to be able to get a Blin install on a *fresh* VM, not touching this one. :) | 13:47 | ||
Geth | rakudo/main: a8111db83e | (Elizabeth Mattijsen)++ | 2 files
RakuAST: re-imagine 'use v6' and '=finish' handling in syntax highlighting.
Fixes #5637 |
13:51 | |
[Coke] | ⏳ 392 out of 2204 modules processed - sooo much faster. | 14:13 | |
coleman | [Coke]: start with a raw script. I am a Packer/VM image baker guy, usually. | 14:28 | |
install dependencies, take a snapshot of the VM. then the VM is ready to go for tests | 14:29 | ||
timo | if you make a Dockerfile instead of script, you can pull the container image from somewhere and immediately jump in | 14:33 | |
do we want to investigate making blin "cumulative" / "resumable"? then we can make a "volume" with the important bits and upload that after some amount of blin is done and then fully destroy the VM instead of just turning it off; does that save any costs? | 14:34 | ||
[Coke] | eh. having a machine set up but turned off is just disk storage (which can add up but isn't too bad) | 14:35 | |
I'm up to about 5 bucks so far with all the failed runs across 3 different VMs (including all the setups, installs, builds, and blin runs) | 14:36 | ||
I think we'd have to decide if we wanted to have multiple blin clients submitting data for the same runs. (rather than making a single blin runner more resilient) | 14:38 | ||
I think the latter is pretty achievable. (and may already be partially implemented) | 14:40 | ||
Not sure we have enough blin runners ATM to consider the former. | 14:41 | ||
coleman | feel free to open an issue on Raku/infra for discussion. | 14:43 | |
we will need a small controller process to start/stop | 14:44 | ||
timo | blin makes sure to isolate things from each other, right? do you think we can find some clever techniques to save some work there? | 15:02 | |
with containers, it's cheap to snapshot and resume with COW and such, so what if we find the most common "prefixes" of flattened lists of dependencies, install them once in a container and stash that away for faster future installations? | 15:03 | ||
or is blin already clever enough about that? | |||
do we have a full-ecosystem dependency graph already available somewhere to play with? does the meta.json of REA have something like that? | 15:04 | ||
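A minimal Raku sketch of what playing with such a graph could look like, assuming only a JSON dump that is an array of per-distribution META hashes with a name and a depends list; the file name and that layout are assumptions, not REA's confirmed format:

    # Hypothetical sketch: count which modules most often appear as
    # dependencies, as a first stab at the "common prefixes" idea above.
    # The file name and the array-of-META-hashes layout are assumptions.
    use JSON::Fast;

    my @dists = from-json 'ecosystem-meta.json'.IO.slurp;

    my %depends-on;
    for @dists -> %dist {
        # "depends" can also be a structured hash in META6;
        # only the simple list-of-strings form is handled here
        my @deps = do given %dist<depends> {
            when Positional { .map(*.Str) }
            default         { Empty }
        }
        %depends-on{%dist<name>} = @deps;
    }

    # how often each module is depended on, most popular first
    my %popularity;
    for %depends-on.values -> @deps {
        %popularity{$_}++ for @deps;
    }
    .say for %popularity.sort(-*.value).head(20);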
timo tries git clone --filter=blob:limit=16k --depth=10 git@github.com:raku/REA | 15:11 | ||
poor github had to spend a lot of time compressing these objects ... and now i get to spend a long time receiving them ... | 15:12 | ||
nine | The Not Invented Here is super strong here | 15:13 | |
timo | haha | 15:18 | |
what, with smoke testing? | |||
are you asking why we are not using the free open build service? | |||
nine | That question did indeed cross my mind. I mean, it's only a 12,000-machine cluster that you can use for free and that handles the whole dependency graph thing for you, parallelizing as much as possible. But I guess you can also try to build all of that yourself and run it on a VM somewhere that will run out of RAM :) | 15:19 | |
15:20 [Coke] left
timo | will it be all right to use it extensively like that for "a single project"? | 15:20 | |
nine | I did ask the openSUSE guys at FOSDEM a few years back whether it'd be ok to use the OBS as CI for Rakudo and build those modules and they were ok with it. | 15:21 | |
timo | ok cool, so let's look further into that then | 15:23 | |
do we have to learn RPM specs? :D | |||
El_Che | nine: the opensuse infra has the worst UI I have seen. Ever | ||
nine | El_Che: so? | 15:24 | |
El_Che | (that said, most devops/infra UIs are crap, but the SUSE one was next level weird) | |||
15:24 [Coke] joined
El_Che | that's why people use VMs and containers elsewhere | 15:24 | |
that pain is less painful | |||
it's sad, because it's a very generous offer from Suse | |||
nine | Most of the time I don't even use the web UI and use the osc command line client instead | 15:25 | |
El_Che | yeah, it's a steep learning curve | 15:26 | |
[Coke] | I certainly would rather use someone else's infra to run the tests. I'm guessing "blin as is" is not the right shape for that. | ||
El_Che | [Coke]: indeed | ||
[Coke] | so my short term is "being able to run blin multiple times a month to reduce last minute surprises for release" | ||
timo | El_Che: can it really be worse than github + azure pipelines? | ||
El_Che | timo: github pipelines not an option? | 15:27 | |
[Coke] | I think there are a lot of dials to consider if we re-architect this, including: "Do we really need to do the whole ecosystem or do we need to curate the modules we test?" | ||
El_Che | [Coke]: yes, drawing lines will save a lot of work | 15:28 | |
timo | well, ideally we do the whole ecosystem, i thought that's more than half the point? | ||
El_Che | timo: I mean github actions aren't that bad | ||
but again, need to invest enough time in it :/ | 15:29 | ||
nine | I don't see what's so bad about the OBS UI in the first place, so I for sure don't see how it could be any worse than having to click 6 links just to get to the log output on a failed run like with that Azure thing. Puts me off enough so I mostly don't bother. | ||
timo | ^- exactly this | ||
El_Che | nine: I get Azure is bad. | ||
I am just saying that I tried OBS and ran | |||
maybe someone else will have more affinity with it | 15:30 | ||
timo | i haven't tried OBS yet, and i could certainly use the exercise if i do have to run | ||
El_Che | (I have nothing against Suse) | ||
it's clear nine had a better experience with it | 15:31 | ||
[Coke] | thought: per release we can do the whole ecosystem. if we run multiple times per release cycle, maybe we just do certain canary distros for fast feedback. | 15:34 | |
timo | or OBS just does the canary distros first and the rest later while we forget it even exists because we don't have to keep an eye on it | 15:35 | |
nine | Basically what I did with build.opensuse.org/project/show/ho...rakudo-git | 15:36 | |
El_Che | nine: so you choose on which distros it runs? | 15:38 | |
(I see a Fedora repo) | |||
last github | 15:39 | ||
oops | |||
timo | it's not limited to suse, it's cross-distro | 15:42 | |
El_Che | [Coke]: so where is the smoking repo? | ||
the free GitHub Actions tier has a 50-minute limit for runs | |||
but you can run lots in parallel | 15:43 | ||
[Coke] | El_Che: ? | ||
El_Che | [Coke]: the code you run to test the ecosystem | ||
[Coke] | El_Che: do you mean github.com/Raku/Blin? | ||
El_Che | ah ok, I thought it was replaced | ||
[Coke] | it depends on Whateverable (which is the same backend that the bisect bot uses) | 15:44 | |
El_Che | and what blin command do you run before a release? | ||
[Coke] | I think we're talking about the shape of what might replace it. | ||
El_Che | [Coke]: yes, sorry, I was confused | ||
[Coke] | that's on jdv, but something like "RAKULIB=lib bin/blin.p6" probably has sane defaults for a release run. | 15:45 | |
El_Che | just looking if something can be run in parallel | ||
thx | |||
[Coke] | I'm currently running "time RAKULIB=lib bin/blin.p6 --old=2024.08 --heartbeat=120.0 --new=fd309af89 --nproc-multiplier=1.5" | ||
by default it runs things across all available processors. | 15:46 | ||
El_Che | cpu sucks in most free places, so running things in parallel is the trick | |||
[Coke] | This does that, but still everything is run on one single machine. | 15:47 | |
(it's not farmed out) | 15:48 | ||
El_Che | and you care only about 1 OS | 15:50 | |
so "install of the modules with this commit of rakudo (and create a report)" | 15:51 | ||
sorry for the basic questions, just trying to understand what the requirements are | |||
15:56 [Coke]_ joined, [Coke] left
[Coke]_ | yay, network blip here, and everything survived. | 15:56 | |
15:57 [Coke]_ is now known as [Coke]
[Coke] | (except my nick) | 15:57 | |
afk for a bit | 15:58 | ||
jdv | wuts on me now? | 16:06 | |
coleman | We appreciate you, jdv | 16:50 | |
timo | jdv: i think just how exactly you run blin before a release | 16:51 | |
jdv | is that not my job as the release automaton? | 16:53 | |
:) | |||
timo | i wonder if it's an oversight that we only have byte-to-str and str-to-byte encoders/decoders and nothing like "filters" where you would put stuff like compression or maybe even encryption | 16:56 | |
for example with zstd, I can use the READ and WRITE and EOF methods of IO::Handle when deriving, then I'll have a handle that "wraps" or "connects to" another handle | 16:57 | ||
if we had something like filters, or a way to combine multiple encoder/decoder/filter things in a row then a handle could just switch its encoding over to "utf8 inside zstd" | 16:58 | ||
as it stands, i can implement "zstd and utf8" as one encoder / decoder and slot that into a Handle, though, including of course sockets | 17:00 | ||
because zstd is byte-to-byte and utf8 is byte-to-str or str-to-byte, and the interfaces for encoder and decoder do byte-to-str and str-to-byte | |||
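A rough sketch of that wrap-another-handle idea, using IO::Handle's documented subclassing hooks (WRITE, READ, EOF); the class and attribute names are invented, and the actual byte-to-byte transform (zstd or otherwise) is left as comments:

    # A pass-through wrapper handle: READ / WRITE / EOF forward to an
    # inner handle. A real filter would transform the bytes on the way
    # through; here they pass through unchanged.
    class WrappedHandle is IO::Handle {
        has IO::Handle $.inner is required;

        # the str-to-byte layer stacked on top of the byte-to-byte one
        submethod TWEAK { self.encoding: 'utf8' }

        method WRITE(Blob:D \data --> Bool:D) {
            # compress here in a real zstd filter
            $!inner.write: data;
            True
        }

        method READ(Int:D \bytes --> Buf:D) {
            # decompress here in a real zstd filter
            $!inner.read: bytes
        }

        method EOF(--> Bool:D) { $!inner.eof }
    }

    # usage sketch: wrap an already-open binary handle and read text through it
    # my $h = WrappedHandle.new: inner => 'data.bin'.IO.open(:bin);
    # say $h.get;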
ab5tract | timo: that sounds really cool | 17:39 | |
Would such filters be a reasonable place to hang protobuf or AVRO encoders? | 17:43 | ||
timo | hm, not so sure about protobuf, since that's more of an "object graph" kind of output | 17:46 | |
i haven't looked at avro at all yet | |||
ab5tract | AVRO is similar, you define the types of values to be found in different places in an object | 17:49 | |
I’m quite sleep deprived so don’t mind me if I’m way off base :) | |||
timo | yeah, something that outputs structured data is not quite as close a fit | 17:50 | |
another thing about encode streams and decode streams is feeding some number of bytes/graphemes in at one end and getting as much as possible out at the other end | 17:51 | |
with utf8 you may need to supply more bytes before a codepoint is finished, and you may need to supply enough bytes for more codepoints if there's combiners (or the possibility of combiners) | 17:52 | ||
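A small sketch of that need-more-bytes behaviour, assuming Rakudo's streaming decoder interface (Encoding::Registry plus a decoder's add-bytes / consume-available-chars / consume-all-chars):

    # Rakudo-specific sketch: a streaming UTF-8 decoder fed one byte at a time
    my $dec = Encoding::Registry.find('utf8').decoder;

    $dec.add-bytes(Blob.new(0xC3));          # first byte of "é"
    say $dec.consume-available-chars.raku;   # "" — codepoint not complete yet

    $dec.add-bytes(Blob.new(0xA9));          # second byte of "é"
    say $dec.consume-all-chars.raku;         # "é" — end of data, everything flushed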
17:57 nine left, nine joined
timo | who wants to see jitted function names in kcachegrind raise your hands up | 18:04 | |
[Coke] | ok, new record. new feature request: estimated completion time shown during the heartbeat. (x/y done, taken z minutes so far...) | 19:01 | |
19:17 vrurg_ joined, vrurg left
19:20 vrurg joined
19:23 vrurg_ left
[Coke] | (new *progress* record, I meant to include) | 19:24 | |
I am only 1/3 through and am already in the Sm's. | |||
timo | SMH my head | 19:56 | |
20:06 sena_kun joined
20:10 vrurg_ joined, vrurg left
nine | o/ | 20:23 | |
22:20 sena_kun left