Welcome to the main channel on the development of MoarVM, a virtual machine for NQP and Rakudo (moarvm.org). This channel is being logged for historical purposes. Set by lizmat on 24 May 2021.
00:08  reportable6 left, reportable6 joined
00:26  kaipei joined
00:29  kaiepi left
01:37  frost joined
02:37  evalable6 left, linkable6 left
02:39  evalable6 joined, linkable6 joined
04:27  kaipei left
04:28  Kaiepi joined
05:28  linkable6 left, quotable6 left, nativecallable6 left, reportable6 left, sourceable6 left, bloatable6 left, coverable6 left, tellable6 left, unicodable6 left, releasable6 left, committable6 left, benchable6 left, bisectable6 left, shareable6 left, statisfiable6 left, evalable6 left, notable6 left, greppable6 left, nativecallable6 joined, unicodable6 joined, tellable6 joined
05:29  committable6 joined, greppable6 joined, bisectable6 joined, reportable6 joined, releasable6 joined
05:30  quotable6 joined, bloatable6 joined, sourceable6 joined, notable6 joined
05:31  linkable6 joined, evalable6 joined, statisfiable6 joined, benchable6 joined, shareable6 joined, coverable6 joined
06:07  reportable6 left
06:10  reportable6 joined
06:25  sena_kun joined
06:27  Altai-man left
Nicholas | [* GOOD *] | 06:47
nine | Good Friday! | 07:17
08:42  linkable6 left
08:44  linkable6 joined
09:41  Kaiepi left
10:06  Altai-man joined
10:20  Kaiepi joined
nine | How on earth could github.com/rakudo/rakudo/commit/5e...42359b185f have broken a callsame test? | 10:29
jnthnwrthngtn | I can't guess that one, sorry. | 10:34
Short of "it never should have worked but this bug hid another bug"
nine | I noticed that the same bug is in pop-scope as well
jnthnwrthngtn: pushed a few more commits. Only regression is that callsame test | 10:46
Huh! This works as advertised: multi foo(Numeric $a) { say "Numeric" }; multi foo(Int $a) { say "Int"; callsame; }; foo(1) | 10:51
This doesn't: { multi foo(Numeric $a) { say "Numeric" }; multi foo(Int $a) { say "Int"; callsame; }; foo(1) } | 10:51
Ok, fixed that one :) | 11:55
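(For context, a hedged aside not from the log: `callsame` re-dispatches to the next-best matching candidate with the same arguments, so both one-liners above are expected to print

    Int
    Numeric

The only difference between the two is the surrounding block; the block-scoped variant was the case that had regressed and that nine then fixed.)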
12:02  frost left
12:03  frost joined
12:06  reportable6 left
12:07  reportable6 joined
12:15  frost left
12:17  frost joined
12:35  Kaiepi left
12:40  linkable6 left
12:43  linkable6 joined
12:49  Kaiepi joined
13:13  frost left
nine | jnthnwrthngtn: should .& get its own Call node type or be treated as syntactic sugar for RakuAST::Call::Name? | 13:35
jnthnwrthngtn | I'd expect a different node type | 13:42
And it'd inherit from RakuAST::Call::Methodish | 13:44
See the three impls of it here for other kinds of method call: github.com/rakudo/rakudo/blob/f542...kumod#L217
nine | Ok, so like the RakuAST::Call::Safe I've added | 13:45
What's the name of the .& construct? I.e. what's the appropriate name for the AST node? | 13:48
13:49  Kaiepi left
jnthnwrthngtn | Hm, I suspect Safe should be a name with the word Method in it too (MaybeMethod perhaps) | 13:54
VarMethod perhaps for the .& one
nine | Docs call .? "Safe call operator". So maybe SafeMethod? | 13:56
jnthnwrthngtn | Yeah, I guess | 13:59
I just like the "call me maybe" reference :P
nine | Ah, in that case :D | 14:01
[Coke] | so, following our naming convention from years past, then. :) | 14:05
nine | nine@sphinx:~/rakudo (rakuast *=)> RAKUDO_RAKUAST=1 ./rakudo-m -e '1.&say;' | 14:10
1
jnthnwrthngtn | yay | 14:11
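(A brief illustration of the two call forms being named here, not taken from the log; `double` is just a made-up sub:

    sub double($x) { $x * 2 }
    say 21.&double;          # .& calls a sub (or any Code object) as a method: 42
    say 42.?base(16);        # .? calls the method only if it exists: 2A
    say 42.?no-such-method;  # ...and gives Nil instead of dying otherwise: Nil

Hence the search above for names like SafeMethod for .? and VarMethod for .& .)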
nine | Cool! Test.rakumod is not getting all the way to Stage qast. There it probably fails because of missing nqp ops support | 14:21
japhb | *is now? | 14:22
jnthnwrthngtn | Is not? Or is now?
I mean, I achieved it not getting there :D
nine | ah, now of course :)
14:22  Kaiepi joined
nine | Yep: compiling complex call names NYI nqp::setbuffersizefh | 14:23
> RAKUDO_RAKUAST=1 ./rakudo-m -e 'use nqp; nqp::say("hello")' | 14:53
hello
That was much easier than expected. Provided that we don't want special AST nodes for nqp ops
jnthnwrthngtn | Probably not, otherwise we get into the "can construct an AST that dumps the same but behaves different" situation | 15:49
Which I'm trying to avoid where we can
I suspect the next thing you'll run into if it gets past stage QAST is that I didn't pay any attention to whether we're producing precomp or not, which entails producing slightly different QAST | 15:50
16:44  Altai-man left
18:07  reportable6 left
18:10  reportable6 joined
18:17  Techcable left
18:42  Kaiepi left
18:44  Kaiepi joined, Techcable joined
19:29  discord-raku-bot left, discord-raku-bot joined
19:52  Kaiepi left, Kaiepi joined
vrurg | jnthnwrthngtn: since Lock::Async is your creation, it's a question for you. :) How hard would it be to implement the same approach to recursion as with Lock? | 20:22
20:26  [Coke] left, [Coke] joined
21:34  [Coke] left, [Coke] joined
22:02  [Coke] left
22:08  [Coke] joined
jnthnwrthngtn | vrurg: Lock relies upon the acquisition and release of a lock being on the same thread; if you violate this rule, it's at best an exception, and in general your program is in an undefined state - e.g. such a mistake may as well be fatal. | 23:11
vrurg: await will fall back to blocking behavior if the current thread holds locks, rather than the non-blocking behavior | 23:12
vrurg | jnthnwrthngtn: this is what I'm working on right now.
Working on the per-thread thing. | 23:13
jnthnwrthngtn | By contrast, Lock::Async is explicitly designed for the lock to be acquired/released over different threads.
The recursion mechanism of Lock is dependent on the lock acquire/release being on one thread.
Working on...what?
Lock::Async does have some kind of recursion coping mechanism, btw; see some of the other methods in there. | 23:14
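(To make the contrast concrete, a minimal sketch, not from the log: Lock is a reentrant mutex keyed on the holding thread, while Lock::Async has no notion of a holding thread at all.

    my $lock = Lock.new;
    $lock.protect: {
        # Re-acquiring on the same thread is recognised as recursion and allowed.
        $lock.protect: { say "nested protect on one thread is fine" };
    }

    my $alock = Lock::Async.new;
    $alock.protect: { say "a single level is fine" };
    # A nested $alock.protect inside that block would not be recognised as
    # recursion and would simply wait forever; the recursion-coping helpers
    # jnthn mentions exist to work around exactly that.
)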
vrurg | The async interchange of phrases created confusion. :) Let me explain.
japhb | .oO( "No, there is too much. Let me sum up." ) | 23:15
vrurg | I'm considering replacing the Locks in CompUnit::* with Lock::Async. That code is designed around the recursive nature of Lock, so I'm considering a way to overcome that.
jnthnwrthngtn | At the end of the day, lock recursion depends upon some idea of "identity" to know if it's recursion or not, and for Lock that's the thread ID.
For Lock::Async there's no obvious equivalent. | 23:16
vrurg | The reason for the replacement is to reduce the risk of `require` invoked on many modules from many threads concurrently blocking - something I experienced in my testing script. It just never precompiles. | 23:17
I tried increasing the number of threads in the pool but the results were quite discouraging: 12G of res mem, 150G+ of virtual and counting. I just never had enough guts to wait all the way to any kind of end. | 23:18
jnthnwrthngtn | It deadlocks, or?
vrurg | It's not deadlocking as such. Just all threads are blocked and none is willing to give up - Lock, you know.
So, 20 modules to `require` and precompile, 100 threads per each module - and voila! A lesser number of threads does basically the same. | 23:19
jnthnwrthngtn | Isn't that the definition of deadlock??
japhb | It's also chewing RAM, so there's at least *something* going on ... | 23:20
jnthnwrthngtn | I guess.
vrurg | Yes and no, in my view. An exhausted pool of threads is somewhat different from classical deadlocking.
jnthnwrthngtn | where does the "100 threads per each module" come from? | 23:21
vrurg | github.com/rakudo/rakudo/pull/4917 - the script is here. I was trying to fix a VMNull popping up from `require` and ::($mod) racing each other. So I made it a real stress test. | 23:22
Since the original production code requires a few modules, I tried to reproduce that too. | 23:23
BTW, rewinding to the start and Lock depending on one thread - I'm considering adding some kind of ID, likely numeric, to continuations created by ThreadPoolScheduler. | 23:25
jnthnwrthngtn | Ah, kind of like a "logical" thread ID? | 23:26
Something like that could work
vrurg | Yep, exactly.
With that ID Lock::Async could just use the same holder queue, but identify nesting based on the ID. | 23:27
jnthnwrthngtn | That said, I don't think Lock::Async relies upon `await` being used, since its interface is in terms of a Promise
vrurg | And while it wouldn't totally make CU free of chances to deadlock, it would at least make them more predictable. | 23:28
jnthnwrthngtn | Maybe, but something still feels a bit off to me that it struggles at the moment.
vrurg | Sorry, I don't get the part about relying upon 'await' | 23:29
23:29  Kaiepi left
jnthnwrthngtn | $async-lock.lock() returns a `Promise`. One could choose to `await` it, which under contention will do the continuation dance when possible | 23:29
vrurg | Oh, BTW, the deadlock actually happens in `react` handling Proc::Async for the compiling rakudo child. The precompile happens, but the react never gets its result. | 23:30
jnthnwrthngtn | However, one can also do $async-lock.lock.then(...)
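(A quick sketch of those two usage patterns, not from the log:

    my $async-lock = Lock::Async.new;

    # Pattern 1: await the Promise returned by .lock; under contention this
    # lets the scheduler suspend the current task, where possible, rather
    # than block a thread.
    await $async-lock.lock;
    # ... critical section ...
    $async-lock.unlock;

    # Pattern 2: chain on the Promise instead of awaiting it.
    $async-lock.lock.then({
        # ... critical section ...
        $async-lock.unlock;
    });
)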
Oh. So we're trying to precompile many things concurrently?
vrurg | Yes, that's what happens with the script on its first run after wiping out .precomp | 23:31
jnthnwrthngtn | Would a simpler solution not be to permit only one `use`/`require`/`need` at a time, thus forcing their serialization?
I mean, I'd expect they have to be anyway to end up with a change of a consistent global state?
vrurg | And slow down the precompile?
jnthnwrthngtn | *chance
vrurg | Say, the 20 modules are required - 20 processes are started. | 23:32
I have serialized per-module precomp already.
jnthnwrthngtn | The majority of programs do `use`, and we have to process those sequentially.
Not to mention modules are pre-compiled upon installation
vrurg | But they're re-compiled when rakudo is updated, for example. | 23:33
'use' is a simple case.
But `require` is often used as a workaround for circular deps.
And the worst part is it's used internally by many modules, without the user knowing about that. | 23:34
jnthnwrthngtn | .oO( I hope I never have to work about a Raku codebase that is doing that... )
*work on
23:35  Kaiepi joined
vrurg | I'm having the issue with LibXML, where cross-dependency is a natural thing. It has to use `require`. | 23:35
jnthnwrthngtn | Well, if we still want to parallelize `require`'s pre-comp to a degree, it'd probably be better to make the degree explicit and serialize it beyond that? | 23:36
vrurg | And whenever I parse many pages at once I might hit that case of many threads with many `require`
At the moment I don't see how to do it properly. | 23:37
The serialization, I mean.
The moment when I realized I'd have to dip my toe into CU was a complete surprise. I never wanted this to happen. :) | 23:38
And I barely understand all the details of it, so it would take too long to rework it properly. Not sure how much time I have for this. | 23:39
jnthnwrthngtn | That's pretty much the same for Lock::Async and recursion; I went as far as "just document you can't" :)
vrurg | It feels like an easier task to me. I see the light with the logical thread ID.
jnthnwrthngtn | I think I'd prefer it to be a different type rather than something added to Lock::Async, unless you can be sure the performance impact is nothing | 23:40
Which I think you can't
Because anything that slows down Lock::Async slows down everything involving Supply | 23:41
vrurg | My only big worry is that Lock::Async is slow. Much slower than Lock. But considering that CU is largely built around slow IO, it may not be that bad.
It must not slow it down. Similar to protect-or-queue-..., this would either be a separate method, or a new candidate for `lock` like $async-lock.lock(:recursive). | 23:42
jnthnwrthngtn | Lock is much slower if you exhaust the thread pool and deadlock :) | 23:43
vrurg | An additional attribute to track the ID might add some small cost though.
jnthnwrthngtn | Yeah, that's why I suggest a subclass or something
Is Lock::Async actually a lot slower in the absence of contention, btw? | 23:44
vrurg | I was considering a new type, but didn't find a good name for it. Lock::Async is pretty much self-explanatory. Besides, it already has the necessary infrastructure, which is mostly hidden in private methods.
jnthnwrthngtn | In the case of non-contention it's meant to be a cas and the return of a constant already-kept Promise, iirc, which I'd hope is fairly cheap | 23:45
vrurg | Not sure about the contention. I did some benchmarks long ago, but don't remember the results.
But it is clearly losing to Lock when competing for CPU.
jnthnwrthngtn | Well, Lock::Async was pretty much designed for supporting the Supply concurrency model, and Supply is generally about IO-bound work, so I can imagine it being less good for CPU-bound work | 23:46
vrurg | I see your point... I think you're right and `use` must not be affected too much. | 23:47
As well as the most common case of non-competing require. | 23:47
Ok, I will try to contain it within Lock::Async and see how much of its infrastructure is going to be used. If not much, then extracting it into a separate type would make more sense. | 23:48
And even if it doesn't solve my issue, the functionality itself would be good to have anyway. | 23:49
Thanks! | 23:50
jnthnwrthngtn | Welcome, good luck :)
fwiw, logical "thread" IDs might well be a really nice thing for tooling too | 23:51
vrurg | Can you suggest a name for the dynamic variable? | 23:53
I thought of $*CONTINUATION-ID | 23:54
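(Purely as a hypothetical sketch of the idea being discussed, nothing below exists in Rakudo: assume the scheduler exposed a logical task ID as $*CONTINUATION-ID; a recursion-aware async lock could then detect re-entry by that identity rather than by OS thread. The class and method names are made up for illustration.

    class RecursionAwareLock {
        has $!lock = Lock::Async.new;
        has $!holder-id;

        method protect(&code) {
            # Logical flow identity; fall back to the OS thread ID if the
            # hypothetical $*CONTINUATION-ID dynamic is not set.
            my $id = $*CONTINUATION-ID // $*THREAD.id;
            if $!holder-id.defined && $!holder-id eqv $id {
                # Recursion: this logical flow already holds the lock.
                code()
            }
            else {
                await $!lock.lock;
                $!holder-id = $id;
                LEAVE { $!holder-id = Nil; $!lock.unlock }
                code()
            }
        }
    }

Error handling and memory ordering are glossed over; as jnthn suggests above, this would likely live in a subclass or separate type so the common Lock::Async path stays cheap.)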