Geth rakudo: vrurg++ created pull request #4929:
Fix cases where DESTROY is invoked on its own stack
lizmat Files=1353, Tests=117175, 302 wallclock secs (36.79 usr 10.45 sys + 4254.83 cusr 351.30 csys = 4653.37 CPU) 07:26
that feels like significantly more :-( 07:27
vrurg ^^
test-t also feels slower :-( 07:28
Geth RakudoPrereq/main: 6b025c0ae0 | (Elizabeth Mattijsen)++ | 13 files
Initial commit after rework
RakudoPrereq/main: d2db840687 | (Elizabeth Mattijsen)++ | Changes
Acme-Anguish/main: 16 commits pushed by (Zoffix Znet)++, (Samantha McVey)++, (Elizabeth Mattijsen)++
review: github.com/raku-community-modules/...77ce496e6f
Geth Acme-Anguish/main: 89fad3f0c3 | (Elizabeth Mattijsen)++ | 18 files
Initial commit after rework
Geth Acme-Anguish/main: cdffdb5600 | (Elizabeth Mattijsen)++ | .github/workflows/test.yml
No testing on Windows

Term::termios appears to have issues there
Acme-Anguish/main: e695197f09 | (Elizabeth Mattijsen)++ | Changes
Games-TauStation-DateTime/main: 9 commits pushed by (Zoffix Znet)++, (Elizabeth Mattijsen)++ 09:41
Geth rakudo/lizmat-Date-coercing: 9a2095b60b | (Elizabeth Mattijsen)++ | 2 files
Subclasses of .Date(Time) coercion

Calling .Date on a subclass of Date, and .DateTime on a subclass of DateTime, returned the object itself even if it was a subclass.
This commit makes sure that if you call .Date(Time) on a subclass of Date(Time), you will actually get the appropriate Date(Time) ... (7 more lines)
rakudo: lizmat++ created pull request #4930:
Subclasses of .Date(Time) coercion
Xliff \o, lizmat 10:57
Geth Games-TauStation-DateTime/main: 4dccdf5887 | (Elizabeth Mattijsen)++ | 24 files
First commit after rework
Geth Games-TauStation-DateTime/main: 7cfd832aa2 | (Elizabeth Mattijsen)++ | 10 files
Geth Trait-IO/main: f59ea11304 | (Elizabeth Mattijsen)++ | 13 files
Initial commit after rework
Trait-IO/main: 59d0f5b09d | (Elizabeth Mattijsen)++ | 2 files
Fix fragility in frame walking
Trait-IO/main: 38c3732e94 | (Elizabeth Mattijsen)++ | 3 files
Geth rakudo/rakuast: 3f2d1b4cc4 | (Stefan Seifert)++ | 4 files
First support for heredocs in RakuAST
Trait-IO/main: 3e375e183f | (Elizabeth Mattijsen)++ | 2 files
Fix pod ytpo
Proc-Q/main: 41 commits pushed by (Zoffix Znet)++, (Tom Browder)++, (Elizabeth Mattijsen)++
review: github.com/raku-community-modules/...fbfc0223a1
[Tux] Hmmm 13:23
Rakudo v2022.04-63-g6bd19e408 (v6.d) on MoarVM 2022.04-3-ga9fcd5a74
csv-ip5xs          0.784 -  0.806
csv-ip5xs-20       5.009 -  5.325
csv-parser         3.680 -  3.828
csv-test-xs-20     0.399 -  0.408
test               6.553 -  6.772
test-t             1.502 -  1.615
test-t --race      0.901 -  0.916
test-t-20         20.811 - 21.447
test-t-20 --race   6.625 -  6.675
Geth Proc-Q/main: 92073b2c5d | (Elizabeth Mattijsen)++ | 18 files
First commit after rework
vrurg lizmat: spectest before the PR: Files=1353, Tests=117150, 110 wallclock secs (34.40 usr 6.84 sys + 4386.50 cusr 341.82 csys = 4769.56 CPU)
lizmat: after: Files=1353, Tests=117152, 113 wallclock secs (36.46 usr 6.39 sys + 4554.99 cusr 345.61 csys = 4943.45 CPU)
lizmat 3% slower? 13:33
vrurg There is a little penalty because any await-based locking would be slower than Lock, and Lock::Soft is not an exception.
lizmat anything within noise level, I would consider a little penalty 13:34
vrurg cusr is better for measuring. It gives 3.8%. 13:35
I have no idea why test-t is slower. I don't think it's using a lot of symbol lookups. Though if startup time is included then it makes sense. 13:36
Unfortunately, the alternative to the slower Lock::Soft in CU and symbol lookups is the risk of blocking on too many concurrent `require`. 13:38
lizmat but concurrent requires are really an artificial situation, are they not? 13:39
vrurg And the blocking may happen even to code not using `require` explicitly — by using a module which is doing it internally.
lizmat: no, it isn't. Look inside LibXML. And my application is using it and may process 100+ HTML pages at once. 13:40
lizmat hmmm 13:41
vrurg I didn't hit the blocking case myself, but when I started testing to find out where the VMNull comes from, it took me a couple of days to realize what was happening to the test script.
[Tux] when I run it again, test-t still is over 1.5 13:44
vrurg The penalty comes not from `require` itself, but from Stash in the first place. `require` uses ModuleLoader.merge_globals, which then uses Stash. This is just another place where it might block. Therefore I made Stash use Lock::Soft too.
[Tux]: do you use `time` to measure the timings? As far as I remember, you do. 13:45
[Tux] perl5's Time::HiRes 13:47
Geth Proc-Q/main: a91883874d | (Elizabeth Mattijsen)++ | t/01-basic.rakutest
Hopefully make tests pass on Windows
vrurg Whatever. It's the script execution time. Then it's most likely the startup time that is affected, not the test itself.
[Tux] github.com/Tux/CSV/blob/master/time.pl#151 + github.com/Tux/CSV/blob/master/time.pl#156 13:49
In my reports, the middle column should reflect run-time (last column is runtime + startup/breakdown time) 13:50
vrurg You cannot measure the runtime _without_ all the module loading externally. 13:51
[Tux] I try to calculate that overhead by executing the test with an empty CSV and then subtracting the time used for that from the final result 13:52
I know it is fuzzy
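[Tux]'s subtraction scheme — time a few empty-input runs, take the minimum as the fixed overhead, subtract it from the real measurement — can be sketched minimally in Python. The `run` helper and the workload here are invented for illustration, not taken from the actual time.pl harness:

```python
import time

def run(csv_rows):
    # Stand-in for real CSV parsing: time only the work itself.
    t0 = time.perf_counter()
    sum(len(row) for row in csv_rows)
    return time.perf_counter() - t0

# Estimate the fixed overhead as the minimum of three empty-input runs,
# then subtract it from the real measurement. Clamp at zero: as noted
# in the discussion, the estimate is fuzzy and can overshoot.
overhead = min(run([]) for _ in range(3))
real = run([[1, 2, 3]] * 100_000)
corrected = max(real - overhead, 0.0)
print(f"{corrected:.6f}")
```

The clamp matters exactly because the overhead estimate depends on system noise (e.g. filesystem load), as vrurg points out below.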
vrurg And it also depends on the filesystem being busy. 13:53
[Tux] true
vrurg It would be more interesting for the test script to measure the test time itself and report it for the runner somehow.
s/itself/on its own/
[Tux] patches welcome? 13:54
more columns?
vrurg Nah, no way for patches. I'm so much behind schedule on my job, which requires this PR...
But yes, one more column. 13:55
It would be really useful to know the startup times. Besides, the more Rakudo gets optimized, the more the outcome of your testing will depend on startup, including module loading. 13:56
lizmat vrurg: making the test script faster doesn't fix the general issue
which also manifests itself in increased spectest times
vrurg lizmat: I'm not proposing to make it faster. I'm proposing more data. 13:57
lizmat more data to do what?
sorry that I seem a bit miffed... but I spent a *lot* of time making Raku a few % faster, to only see that disappear :-( 13:58
vrurg To see both startup times and the actual test runs. The second will provide timing directly related to VM optimizations.
lizmat: remember new-disp was introduced with some startup penalty? 13:59
lizmat yes
vrurg So... :)
lizmat at least that had the promise of improvements... I don't see that here (yet) 14:00
vrurg I mean, it's ok to explain to a user what makes Lock different from Lock::Async and why too many threads with Lock may block. But explaining why `require` is not thread-safe and why heavily multithreaded HTML processing blocks without any feedback – that's different. 14:01
[Tux] vrurg: more data is test-t-20 : it has 20x the number of data rows compared to test-t
vrurg [Tux]: I meant more data on timing. I.e. not to try running the test against an empty CSV, but have the immediate information about script's start to end time _and_ the actual test code time. 14:03
[Tux] ah, oke, but I do not have that data (yet) 14:04
vrurg I know. :) That's why I proposed test-t (and other tests?) to report it to the runner somehow. Say, use a specially formatted diag message which can then be located in the output and parsed. 14:05
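A minimal Python sketch of that idea: the test emits its own timing as a specially formatted diag line, and the runner greps it back out of the captured output. The `# TIMING` marker format is invented here for the sketch; it is not anything test-t actually emits:

```python
import re
import time

def run_test():
    t0 = time.perf_counter()
    sum(range(1000))          # stand-in for the actual test body
    elapsed = time.perf_counter() - t0
    # TAP-style diagnostic line: invisible to a normal TAP consumer,
    # but easy for a timing-aware runner to locate and parse.
    print(f"# TIMING test-body={elapsed:.6f}s")

run_test()

# Runner side: find the marker in the captured output and parse it.
m = re.match(r"# TIMING test-body=([\d.]+)s", "# TIMING test-body=0.016000s")
parsed = float(m.group(1))
print(parsed)  # 0.016
```

This keeps the script's total wall-clock time (which lizmat wants preserved) intact, while making the test-body time available as extra data.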
Geth Proc-Q/main: c42f9a4f65 | (Elizabeth Mattijsen)++ | 2 files
Revert test changes, don't bother with Windows

Apparently Windows doesn't like Proc::Q
vrurg lizmat: with regard to gaining back some of the performance, there are things to be done at some point anyway like optimizing the `await` (mostly the thread releasing/resuming part, I think); caching symbol lookups on the call site to avoid many calls to .WHO/Stash; and optimizing the phasers, especially the LEAVE chain. 14:09
lizmat vrurg: but then you don't understand the raison d'etre of test-t: it is supposed to be *including* startup, showing that it is a viable thing for quick scripts
I've optimized the LEAVE chain recently
especially if it is a single LEAVE phaser 14:10
vrurg lizmat: but I don't propose to exclude that data. Only to have it all available.
vrurg There are also optimizations to be done on VM side. And fixes to make it possible to use `return`/`fail` in code blocks passed to `protect` methods. 14:11
lizmat could you sketch those out? 14:12
[Tux] I misinformed you about the columns: the middle column is the time used: this is the total time of the test, from which I subtract the minimum of three runs on an empty CSV
the last column is the same, but for a second run. Per row, the fastest of the two runs is always on the left, the slowest on the right 14:13
vrurg lizmat: as I said above, not any time soon. I've already lost more than a week fixing `require`.
And still have to resolve some minor issues with LibXML. 14:14
And then start doing the new task which I was given exactly a week ago. :) 14:15
Geth Proc-Q/main: 62944528e9 | (Elizabeth Mattijsen)++ | Changes
vrurg [Tux]: Here is approximately what I mean: gist.github.com/vrurg/29df9c1ce39f...b1b41f4291 14:19
You can see that the whole script time is .5, but the test as such took .016. So the rest is startup. 14:21
Geth Proxee/main: 20 commits pushed by (Zoffix Znet)++, (Elizabeth Mattijsen)++
review: github.com/raku-community-modules/...ccd524eb0f
vrurg [Tux]: added a modification to report module loading too. Pretty informative. :) 14:23
vrurg lizmat: before I'm off to do some work today, eventually :) – the only thing to be done to recover some of the performance is replacing Lock::Soft with Lock everywhere. CU would still be thread-safe up to the moment where too much concurrent module loading happens. 14:25
vrurg lizmat: But as I said above, it's all ok until heavy-lifters like Gtk+LibXML+heavy concurrency kick in. Then we would have to do something about it. 14:27
And, BTW, I still don't understand why we limit the number of threads in ThreadPoolScheduler anyway? What's the reason?
vrurg is afk for a couple of hours.
[Tux] But to include that in the report(s), would require rewriting all 41 tests
vrurg [Tux]: I only suggest. It's up to you whether you can/want to or not. 14:33
nine ThreadPoolScheduler is simply full of heuristics. In general you want about as many busy threads as there are cpu cores as more threads would only increase overhead and memory usage but not improve performance. Knowing if a thread is really processing is hard though. 14:54
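The sizing heuristic nine describes — roughly one busy worker per CPU core for CPU-bound work — is the usual default for fixed pools in most runtimes. A Python illustration (the workload is arbitrary):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Size the pool to the core count: for CPU-bound work, extra threads
# mostly add scheduling overhead and memory without more throughput.
cores = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=cores) as pool:
    result = sum(pool.map(lambda n: n * n, range(10)))
print(result)  # 285
```

The hard part, as nine says, is that the scheduler cannot easily tell a thread that is computing from one that is blocked, so "busy" is only approximated by heuristics.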
vrurg nine: a locked thread doesn't consume CPU. 14:57
Besides, I would leave it up to the user to decide how many running threads they want.
Geth Proxee/main: f0bd2be8db | (Elizabeth Mattijsen)++ | 13 files
First commit after rework
Proxee/main: 297a650f27 | (Elizabeth Mattijsen)++ | 2 files
Testing tweaks
nine vrurg: and how would the user control the number of threads? 15:02
vrurg: and of course a thread waiting for a clock does not busy the CPU. But how does the scheduler know that this thread is currently waiting? 15:03
vrurg nine: Control would depend on their task. Thread status could be reported by VM.
nine And what if the thread is blocked in some native code? How would the VM know? 15:04
Geth Proxee/main: 02cb99ee0f | (Elizabeth Mattijsen)++ | 2 files
vrurg nine: But why do we have to care in the first place? I don't see a reason to limit a user in what they can do. 15:06
I don't understand how too many locks blocking the whole application is better than an unlimited thread pool. And it's not the first time I've hit that blocking. 15:07
nine In what way do we limit the user?
vrurg my $l = Lock.new; my $p = Promise.new; for ^$*KERNEL.cpu-cores * 10 { start { $l.protect: { await $p; say "ok" } } }; start $l.protect: { $p.keep }; say "done"; 15:09
evalable6 done
vrurg my $l = Lock.new; my $p = Promise.new; for ^$*KERNEL.cpu-cores * 10 { start { $l.protect: { await $p; say "ok" } } }; await start $l.protect: { $p.keep }; say "done"; 15:10
nine So? 15:10
vrurg This is a simplified version of what happens sometimes. The pool gets exhausted.
nine Yes, but in what way do we limit the user?
vrurg So, we enforce a developer to find a workaround
lizmat s/en// ? 15:11
vrurg See, the second version times out? Because of the limited number of threads.
vrurg I mean, instead of using CPU resources the way they want, some would have to think of how to get around the limit. Or raise it to some huge value, which is almost the same as having it unlimited. 15:13
vrurg And I don't see what problem is solved by the limitation? 15:13
nine So just raise the limit to what's required by your workload? $*SCHEDULER = ThreadPoolScheduler.new(:max-threads(1000)); my $l = Lock.new; my $p = Promise.new; for ^$*KERNEL.cpu-cores * 10 { start { $l.protect: { await $p; say "ok" } } }; await start $l.protect: { $p.keep }; say "done"; 15:14
What's the problem with that?
The user does have full control over how many threads the scheduler allows for. They can even replace the whole scheduler with an implementation that better fits their workload. So I ask again: in what way are they limited? 15:16
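nine's point — that sizing the pool for the workload makes the same pattern complete — holds in the Python rendition of the example too: one spare worker beyond the blocking tasks is enough for the unblocking task to run.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

event = threading.Event()
# One worker more than the number of blocking tasks: the setter
# task can now actually be scheduled.
pool = ThreadPoolExecutor(max_workers=3)
blockers = [pool.submit(event.wait, 5.0) for _ in range(2)]
pool.submit(event.set)    # runs on the spare worker, unblocking the rest
results = [f.result() for f in blockers]
print(results)  # [True, True]
pool.shutdown()
```

Whether the right fix is "raise the limit" or "remove the limit" is exactly what the rest of this exchange argues about.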
vrurg nine: Can you make it _unlimited_? No, because the $.max_threads is integer. 15:17
nine Btw. your example deadlocks even with a single thread
And what's the practical difference between unlimited and 9223372036854775808 threads? And again: you can replace the whole scheduler with one that just starts a thread for every single task. Where is the limit? 15:18
vrurg The problem is when using third-party modules, where normally one doesn't know the implementation details. The limited pool might hit the user unexpectedly, and it takes really long to debug the problem.
vrurg nine: All approaches require the user to _know_ about the problem in the first place. This raises the threshold of learning. 15:19
vrurg And I still don't understand what problem does it solve? 15:20
I mean, if some limitation is used – there is a reason behind it. I don't see a reason.
Geth Subset-Helper/main: 20 commits pushed by (Zoffix Znet)++, (Samantha McVey)++, (Elizabeth Mattijsen)++, (JJ Merelo)++
review: github.com/raku-community-modules/...815baeb354
nine And without the limit other users will run into the problem of too many threads getting started yielding suboptimal performance in the best case and running out of memory in the worst case. Neither exactly a joy to debug.
nine You're basically arguing "the current default is bad for my specific use case, so it must be bad in general and needs to be adapted to my specific use case" 15:21
vrurg It's much easier to find out about too many threads or memory problem than to track down a deadlock. Especially when the only possible way to debug is to build with debug and use rakudo-gdb-m 15:22
That's how I had to do it. Plus one has to know about MVM_dump_backtrace(tc).
nine I don't see my computer grinding to a halt because it's swapping to death all that easy to debug. 15:23
vrurg nine: Nah, I argue because I'm not the only one. I had to explain this case previously twice or thrice.
nine And how many people would suffer from your solution?
lizmat So, wouldn't the spesh thread be able to see that there is a deadlock ?
nine Such numbers are only useful if you can put them into context.
vrurg But swapping is a way more apparent reason.
nine lizmat: I don't see how 15:23
lizmat not getting logs from threads ? 15:24
nine A thread busily running speshed code won't submit any logs either
vrurg BTW, speaking about the scheduler – we don't have a readily available alternative one. And writing one's own? Not for such a thin reason.
lizmat vrurg?? 15:25
we have a scheduler ??
nine A configurable one, precisely because the default cannot be perfect for everyone
vrurg lizmat: nine suggested a user writing a 1-to-1 thread scheduler which would just start a thread.
nine: I think best would be to make unlimited option available with ThreadPoolScheduler. 15:26
nine I did not suggest this. I said it's possible as an argument against your claim that we limit the user in some way. A claim that you have not substantiated so far.
But it is? ThreadPoolScheduler.new(:max-threads(2^63))
vrurg In this case it's much easier to debug a complain about deadlocking code by advising to try the unlimited option. 15:27
My aesthetic feelings cry at that variant. :) 15:28
BTW, it's max_threads. 15:29
nine Any value > 2^47 is exactly equivalent to unlimited, as you won't be able to start more threads anyway (and I doubt you can even start 2^47 even on a highly theoretical machine with a full 64 bit address space)
vrurg I didn't know about the 2^47 limit. But it's OS-dependent anyway, I guess? Anyway, I've just thought about adding :unlimited, which would simply translate into :max_threads(2^63) or something like this. 15:31
lizmat vrurg: I'll make a PR for that
vrurg Have to go now. Very productive brainstorming anyway. 15:32
nine Without the context of this discussion, I wouldn't know what ThreadPoolScheduler.new(:unlimited) means. So it'd have to be :unlimited_threads or so. And then we need to prohibit setting :unlimited_threads, :max_threads(4) 15:33
vrurg lizmat: The last thing. I would probably consider going back to use Lock. Need to try a couple of things first.
lizmat I was more thinking: :max_threads<unlimited> 15:34
[Coke]_ lizmat: let me know if you need someone to run windows tests for modules 15:34
(would love it if we had a windows blin run) 15:35
nine But max_threads is an Int()
lizmat [Coke]_: well, Term::termios appears to have issues
nine: there's TWEAK :-)
nine Still makes me cringe. The parameter has a type constraint - which is good. Allowing to pass anything that doesn't fit the constraint is never good design 15:38
lizmat then maybe :max_threads(Inf) or :max_threads(*) ? 15:44
patrickb o/ 15:48
tellable6 2022-05-02T01:47:47Z #raku <melezhik> patrickb sparkyci did see your new commits in DevelExecRunerGenerator, I restarted the daemon and new build succeded - sparrowhub.io:2222/report/346 , I am working on SparkyCI stability ...
patrickb When I run the rakubrew.org server with the heap snapshot profiler active, the process reserves >10GB of RAM after ~10 refreshes, yet the resulting mvmheap file is 202MB and Comma tells me the heap was at most 102MB. What can I conclude? That it's not HLL memory that's leaking, right? 15:57
Geth rakudo: 59d0787177 | (Elizabeth Mattijsen)++ | src/core.c/ThreadPoolScheduler.pm6
Make ThreadPoolSchedule.(initial|max)_threads uints

Because we can and it should probably help a bit in performance
patrickb How could I track those 10GB then? I haven't done any C level memory leak search up to now. Can someone list the tools that I should read up on for that? 16:04
nine patrickb: how do you know it reserves >10GB? 16:06
patrickb htop 16:12
nine But can htop tell you how much memory a process really uses? And what definition does it use for that? 16:13
patrickb Good questions. But given that the actual memory utilization of the system goes up it can't be only memory mapped stuff or similar. 16:16
github.com/rakudo/rakudo/issues/42...1128734396 <- Has a graph. 16:17
patrickb valgrind says memory was not leaked. So I guess the memory is still referenced. 18:00
So the moar heap profiler doesn't see it, valgrind doesn't see it. What to do next? Maybe strace to see where in the code all those mallocs come from? 18:03
googling "linux memory profiler" has heaptrack at the top 18:05
patrickb Finally. After compiling moar with --no-mimalloc heaptrack can finally see the gigabytes of allocated memory 18:24
Geth rakudo: 556f1a2a08 | (Elizabeth Mattijsen)++ | src/core.c/ThreadPoolScheduler.pm6
(General|Timer)Worker don't need a queue attribute

The closure on the argument is enough. Also use TWEAK instead of BUILD.
patrickb copy_to reserves all those GBs. I guess that's not helpful at all. I don't even know which copy_to that is! 18:29
lizmat copy_to does not occur in any of the Rakudo code base... nor in NQP? 18:37
patrickb copy_to is part of the reprs in MoarVM 18:55
it's a C function
MasterDuke patrickb: fyi, the next release of heaptrack should include support for mimalloc, so you won't need to build moarvm with --no-mimalloc 19:12
Geth rakudo/lizmat-unlimited-threads: c645eeb51e | (Elizabeth Mattijsen)++ | src/core.c/ThreadPoolScheduler.pm6
Add :max_threads(*|Inf) as option to ThreadPoolScheduler

For those of us brave enough to not want to be stopped by a maximum number of OS threads. Specifying * or Inf will internally store the value 9223372036854775807 (aka the current maximum value for an uint attribute). The accessor will return Inf if this value is found.
rakudo: lizmat++ created pull request #4931:
Add :max_threads(*|Inf) as option to ThreadPoolScheduler