00:36 AlexDaniel left 07:17 finanalyst joined 08:34 finanalyst left 08:58 lizmat left 08:59 lizmat joined 10:19 finanalyst joined 10:26 Pixi` joined 10:29 Pixi left 12:11 finanalyst left 12:19 finanalyst joined 12:25 finanalyst_ joined 12:28 finanalyst left 12:37 finanalyst__ joined 12:39 finanalyst_ left 12:44 Pixi` is now known as Pixi 12:52 finanalyst__ left
timo looking for suggestions on which parts of moarvm would be interesting to run a fuzzer against. ideally it'd be a small part that can run very quickly and doesn't need to touch many pieces of moarvm. the streaming decoder was pretty much optimal in this respect. when fuzzing the utf8-c8 encoding, we get to exercise the streaming decoder as well as NFG synthetics, but with a nasty little hack we can 16:17
skip the need to spend time on GC, for example.
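(A toy illustration, in Python rather than MoarVM's C, of the kind of boundary condition a fuzzer aimed at a streaming decoder is after: a multi-byte sequence split across chunk boundaries must decode identically no matter where the split falls.)

```python
import codecs

def decode_chunks(chunks):
    # An incremental decoder keeps partial-sequence state between feeds,
    # analogous to a streaming decoder keeping state between reads.
    dec = codecs.getincrementaldecoder("utf-8")()
    out = "".join(dec.decode(c) for c in chunks)
    out += dec.decode(b"", final=True)  # flush; raises on a truncated tail
    return out

data = "naïve ☃".encode("utf-8")       # contains 2-byte and 3-byte sequences
whole = decode_chunks([data])
# Split at every byte offset, including inside multi-byte characters.
for i in range(len(data) + 1):
    assert decode_chunks([data[:i], data[i:]]) == whole
print("all splits decode identically:", whole)
```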
something like fuzzing the nqp compiler binary on nqp code even with just --target=parse would likely be much, much too slow to get much of anything in terms of results ... gotta come up with a trick or two 16:23
lizmat I guess anything involving multithreading
I still have some tests in ParaSeq that will fail regularly
this *could* be that the Raku code is faulty... but it could be some gremlin somewhere 16:24
timo multithreading will be an additional bit of trickiness, having to do with stability
i need to look into how the "deterministic replay" feature of afl works first
thing with ParaSeq is that running a test so that it crashes takes a little while right? 16:28
lizmat yes... it does 16:29
timo you think i can get above 1000 runs per second on one core? :D
lizmat ah, no
hmmm.. I still think there's a gremlin lurking with module loading 16:30
timo that's going to be a little tricky
lizmat so maybe just a "use Test" ?
timo with AFL we can have a point where all kinds of setup are already done, before the first access to the test input is done 16:33
so we can mitigate at least a part of the startup time cost
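(Back-of-envelope arithmetic for why moving the fork point past startup matters; the 100 ms / 1 ms figures below are made-up placeholders, not measurements of rakudo or moarvm.)

```python
# Throughput with and without paying startup cost on every run.
startup_s, per_input_s = 0.100, 0.001

naive = 1 / (startup_s + per_input_s)   # full startup repeated per input
deferred = 1 / per_input_s              # startup paid once, then fork

print(f"naive: {naive:.0f}/s, deferred fork point: {deferred:.0f}/s")
```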
lizmat but I would be very interested in the startup logic 16:34
timo the position of that spot would have to be chosen on a per-fuzzing-campaign basis 16:35
and also importantly, the input to the process should control as much of how things behave as possible, so I'm thinking for something like the ParaSeq thing, scheduling decisions would want to go in there so the fuzzer can play with when which thread does how much work 16:39
but if you can't get above 100 runs of the whole thing per second, you'll have a really hard time getting anywhere 16:40
it looks like i misunderstood how the persistent record/replay feature works and it's not meant to fix the kind of thing i was expecting it could 16:59
there are a few "deterministic multithreading" projects out there, but the two i've looked at so far both mentioned only handling multithreaded workloads that explicitly use locks, and we do have some lock-free data structures in moarvm that I assume would cause the determinism to go out of the window again 17:17
disbot2 <melezhik.> timo: I asked the user who has this “resource temporary unavailable” issue to install Rakudo from RPM on their aarch64 VM; I am not sure if Rakudo itself could be relevant to the issue, though 17:52
timo do we know what the exact version of rakudo was? 17:54
wow, building moarvm with tsan makes everything astoundingly slow 17:57
> Stage parse : 414.288 18:06
holy ...
disbot2 <melezhik.> 2026.02 18:08
<melezhik.> rpa.st/JMU4Q
<melezhik.> Example of stack trace
timo but you still want to set $*OUT's output buffer to 0? 18:09
disbot2 <melezhik.> Like I said it happens all the time after the same line that’s been printed before
<melezhik.> No , I removed it per your suggestion
timo ah, good. but that didn't change the behaviour? 18:10
disbot2 <melezhik.> Actually with or without it , it still fails
<melezhik.> No
<melezhik.> Did not
<melezhik.> You can see these two last commits - github.com/melezhik/Sparrow6/commits/rta-issue/ 18:11
<melezhik.> Also I did just in case here ( in the sanitizer, which processes stdout from qemu and pipes it back to Sparrow ) - git.resf.org/testing/Sparky_Rocky/...7e0e638010 , still no luck 18:13
timo is the rpa.st you sent with or without the change? 77EAEC7A445986919283DEAE434F14237C92F7B8 is the hash of the source file there, but from the hash alone i don't know if that's with or without the buffering 18:14
disbot2 <melezhik.> with 18:16
timo sorry, "with or without buffering" is possibly backwards or at least confusing 18:17
does it have the code active that turns the buffering on stdout and stderr to 0?
disbot2 <melezhik.> I mean if we run code with output-buffer = 0; or we run code without output-buffer = 0 ; the issue is the same , with the same stack trace 18:18
timo ok good 18:19
disbot2 <melezhik.> Now they run the same tests with Rakudo 2026.03 from RPM, so we will see
<melezhik.> Also they have ( FWIW ) NVMe hard disk drive 18:20
timo hm. actually ... could the program have output an ansi escape sequence that tells the terminal emulator to stop processing input?
can we get a completely byte-by-byte raw output? not sure if the thing pasted is from a file or from a terminal? 18:21
disbot2 <melezhik.> From file
<melezhik.> Also remember - this is a flapper 18:22
<melezhik.> The same test succeeds on average 4 out of 6 times
<melezhik.> So only 2 out of 6 runs fail on average 18:23
<melezhik.> I am not sure how we can get raw output, as it is redirected to a file
<melezhik.> This is part of sparky scenario 18:24
18:24 librasteve_ joined
timo remember the "script" tool? 18:25
it does have the annoying property of "the user has to kill the bash inside of it" or whatever it was 18:26
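(A sketch of one way to check the escape-sequence hypothesis against a captured log offline: scan the raw bytes for ESC and report what follows each one. The byte string here is a stand-in for reading the real file, and the regex only covers bare ESC and CSI-style sequences.)

```python
import re

def find_escapes(raw: bytes):
    # CSI sequences look like ESC [ params final-byte; also catch bare ESC.
    return [(m.start(), m.group())
            for m in re.finditer(rb"\x1b(\[[0-9;?]*[A-Za-z])?", raw)]

# Stand-in for open("out.log", "rb").read() on the captured output file.
raw = b"boot ok\x1b[2Kretry\x1b[1A\n"
for pos, seq in find_escapes(raw):
    print(pos, seq)
```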
disbot2 <melezhik.> Ok, after rebuilding Rakudo from rpm the issue is gone 18:46
<melezhik.> Version 2026.03
librasteve_ rakudoweekly.blog/2026/04/06/2026-...lip-flops/
disbot2 <melezhik.> Run many times , not a single “resource temporary unavailable” 18:47
<melezhik.> I guess I will ask the user details on how he built Rakudo from source 18:48
<melezhik.> But I think this was “ Rakubrew build from source” option
<melezhik.> So it's hard to say whether it's a 2026.02 issue or because he built from source 18:49
timo is the original build still around and can be re-tried?
an strace of the program when the issue happens might be helpful, but it will want to be limited to only some syscalls so it's not a gigabyte of data to sift through 18:50
disbot2 <melezhik.> timo: let me retract my “issue has gone” statement, more tests on fresh rpm Rakudo still show it 20:07
<melezhik.> rpa.st/52JPU
<melezhik.> And in “raw” format - rpa.st/raw/52JPU 20:08
timo I haven't verified that using utf8-c8 like you are using it now does not corrupt memory; just because it doesn't crash the same way it used to doesn't mean nothing is wrong 20:10
disbot2 <melezhik.> So this randomly happens every time right after this line - rpa.st/52JPU#1L85
<melezhik.> I generally agree I just want to fix the issue )) 20:11
timo it could be relevant that the "line" before that is huge
disbot2 <melezhik.> And suspiciously it happens only on this specific machine , though we don’t have large statistics on that 20:12
<melezhik.> Absolutely 20:13
<melezhik.> If you find something I would appreciate it if you write it in the related gh issue , thanks 🙏 20:14
<melezhik.> AFK &
timo please prepare a version of Sparrow6 that uses latin1 for the Proc::Async when running the bash thing and for now ignore that the output will look a little bit like nonsense. give that version to your user and see if that changes anything about the behaviour
disbot2 <melezhik.> Ok will probably try tomorrow
20:47 finanalyst joined 21:37 finanalyst left
Geth setup-raku/dependabot/npm_and_yarn/vite-7.3.2: 4cd25f2712 | dependabot[bot]++ (committed using GitHub Web editor) | package-lock.json
Bump vite from 7.3.1 to 7.3.2

Bumps [vite](github.com/vitejs/vite/tree/HEAD/packages/vite) from 7.3.1 to 7.3.2.
  - [Release notes](github.com/vitejs/vite/releases)
  - [Changelog](github.com/vitejs/vite/blob/v7.3.2...NGELOG.md)
  - [Commits](github.com/vitejs/vite/commits/v7....ages/vite)
... (8 more lines)
21:42
setup-raku: dependabot[bot]++ created pull request #51:
Bump vite from 7.3.1 to 7.3.2
disbot2 <melezhik.> timo: “<timo> so just "say" won't be the right tool for the job there, you will have to either create a Buf with the whole line and $OUT.write it or switch between say and write and something for the newline at the end” so the algorithm is this: 1) call Proc::Async with :enc<latin1> 2) iterate over “lines” in a react / whenever block 3) in console-wo-prefix, instead of say “$line”, accumulate new chunks in a Buf and only $*OUT.write($buf); 22:10
$*OUT.write(“\n”); when a newline character arrives in the Buf, then of course empty the Buf to start all over again , is that right ?
timo when you have :enc<latin1> you'll still get Str in your whenever $blah.stdout.lines, so you wouldn't be accumulating Buf, you would still be accumulating Str pieces, but every one of them is a whole line 22:28
but when you go to output these Str you will want to .encode("latin1") these strings to get a Buf and use $*OUT.write rather than print or say, because write takes a Buf 22:29
btw, if you're splitting on lines, the entire line of output will only make it to the output after the newline was written, but i'm sure there are cases where no newline is written after something, like for a password entry or other kind of prompt, and we can see from the huge line in the qemu boot that sometimes the same line is being rewritten over and over again, which looks like a single line to the 22:38
recipient
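(A minimal Python sketch of the property being relied on here: latin1 is a total, reversible byte-to-codepoint mapping, so decoding and re-encoding can never fail or alter bytes. That is what makes the output merely look like nonsense instead of risking the utf8-c8 path.)

```python
import sys

# latin-1 maps every byte 0..255 to exactly one codepoint and back, so
# unlike utf-8 a decode never raises and a round trip never changes bytes.
raw = bytes(range(256))
text = raw.decode("latin-1")            # always succeeds
assert text.encode("latin-1") == raw    # lossless round trip

# The write-instead-of-say step, with Python's sys.stdout.buffer standing
# in for Raku's $*OUT.write (both take raw bytes rather than text):
line = "gr\xfc\xdf"                     # may render oddly, but stays lossless
sys.stdout.buffer.write(line.encode("latin-1") + b"\n")
```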
[Coke] we have a bad bot acting up in #raku - any channel ops around?
timo not a critical issue, just thought you should be aware
22:42 ChanServ sets mode: +o timo
Geth setup-raku: 1884ecb14a | dependabot[bot]++ (committed using GitHub Web editor) | package-lock.json
Bump vite from 7.3.1 to 7.3.2 (#51)

Bumps [vite](github.com/vitejs/vite/tree/HEAD/packages/vite) from 7.3.1 to 7.3.2.
  - [Release notes](github.com/vitejs/vite/releases)
  - [Changelog](github.com/vitejs/vite/blob/v7.3.2...NGELOG.md)
  - [Commits](github.com/vitejs/vite/commits/v7....ages/vite)
... (9 more lines)
23:50