[00:05] *** [Coke] joined
[04:33] *** unicodable6 left
[04:33] *** evalable6 left
[04:33] *** releasable6 left
[04:33] *** benchable6 left
[04:33] *** quotable6 left
[04:33] *** notable6 left
[04:33] *** bloatable6 left
[04:33] *** squashable6 left
[04:33] *** shareable6 left
[04:33] *** greppable6 left
[04:33] *** nativecallable6 left
[04:33] *** coverable6 left
[04:33] *** bisectable6 left
[04:33] *** statisfiable6 left
[04:33] *** sourceable6 left
[04:33] *** committable6 left
[06:17] good *, #moarvm
[07:19] *** AlexDaniel left
[07:19] *** uzl[m] left
[07:19] *** psydroid left
[07:19] *** crystalfrost[m] left
[07:24] *** AlexDaniel joined
[07:28] Good * from NL as well
[07:28] *** psydroid joined
[07:28] *** uzl[m] joined
[07:28] *** crystalfrost[m] joined
[07:29] I think I found another race condition that can segfault, at an unexpected, yet rather hot place:
[07:29] https://github.com/rakudo/rakudo/blob/master/src/core.c/stubs.pm6#L52
[07:29] aka nqp::atkey(GLOBAL.WHO,$pkgname) and nqp::atkey(PROCESS.WHO,$pkgname)
[07:29] spotted with: MoarVM oops: MVM_str_hash_fetch_nocheck called with a stale hashtable pointer
[07:29]   at SETTING::src/core.c/stubs.pm6:38
[07:30] so it looks like we're going to need a lock on DYNAMIC :-(
[07:31] or provide a threadsafe interface for fetching values from GLOBAL.WHO / PROCESS.WHO
[08:29] meh... I guess this should be extended to all Stashes
[08:36] and consequently also to all ContainerDescriptors that initialize hash entries
[08:46] *** MasterDuke left
[08:47] jnthnwrthngtn nine ^^
[08:52] *** sena_kun left
[08:53] *** sena_kun joined
[10:07] *** sena_kun left
[10:08] *** sena_kun joined
[10:11] Well, we've already declared that one shouldn't expect Hash to be threadsafe, so I don't see it as *that* general a problem. Workflows involving stashes arguably do need something.
[10:14] like dynamic variables?
[10:15] m: dd PROCESS.WHO.^name
[10:15] rakudo-moar 530e17848: OUTPUT: «"Stash"␤»
[10:16] but apparently when compiling the 6.d setting, PROCESS.WHO is a BOOTHash
[10:16] is that some kind of HLLization going on?
[10:17] using Stash.$!lock.protect on Stash.AT-KEY and using that in DYNAMIC fails with: Method 'AT-KEY' not found for invocant of class 'BOOTHash'
[11:05] *** psydroid left
[11:05] *** uzl[m] left
[11:05] *** AlexDaniel left
[11:05] *** crystalfrost[m] left
[11:47] Huh... I thought we had already solved that DYNAMIC initialization problem
[11:56] made it less likely, but still not 100%
[11:56] it's very rare, I've only been able to make it crash that way once
[12:06] Ah, that was INITIALIZE-DYNAMIC which we made thread safe
[12:08] At least we are going to need that lock only when accessing dynamics that are registered in GLOBAL or PROCESS
[12:08] FWIW, I found a small race condition in Stash.ASSIGN-KEY, but the fix breaks things, so I'm going to look at it again after finishing the weekly
[12:10] I wonder why getdynlex is marked :noinline. It is using the frame walker, so I figure it should be safe to inline. Maybe we just didn't remove the marker when switching the implementation to the frame walker?
[12:11] inlining DYNAMIC would be a good win
[12:13] nine: I think the situation was that it started looking one caller out, and to do that it wanted a ->caller to get the starting point
[12:14] If that's no longer the case and it uses the frame walker from the current location, then it probably can be relaxed
[12:15] fwiw, could someone explain why the body of src/Perl6/BOOTSTRAP/EXPORTHOW.nqp is run at each Rakudo startup?
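To make the lock idea discussed above concrete, here is a minimal Raku sketch, not the actual Rakudo patch: a plain Hash plus an explicit Lock stand in for PROCESS.WHO and the proposed per-Stash lock, and the helper names safe-atkey / safe-assignkey are hypothetical.

    # Sketch only: a plain Hash and an explicit Lock stand in for
    # PROCESS.WHO and the per-Stash lock discussed above; the real
    # Stash internals (and the BOOTHash case while compiling the
    # setting) differ.
    my %who;
    my $lock = Lock.new;

    sub safe-atkey(Str $pkgname) {
        $lock.protect: { %who{$pkgname} // Nil }
    }

    sub safe-assignkey(Str $pkgname, $value) {
        $lock.protect: { %who{$pkgname} = $value }
    }

    # concurrent readers and writers now serialize on the lock instead
    # of racing on the hash's internal storage
    await (^8).map: -> $i {
        start { safe-assignkey("\$*DYN$i", $i); safe-atkey("\$*DYN$i") }
    };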
[12:22] MVM_spesh_frame_walker_init(tc, &fw, cur_frame, 0);
[12:22] return MVM_frame_getdynlex_with_frame_walker(tc, &fw, name);
[12:29] lizmat: can you please test MoarVM's inlineable_getdynlex branch?
[12:49] *** AlexDaniel joined
[12:59] *** psydroid joined
[12:59] *** uzl[m] joined
[12:59] *** crystalfrost[m] joined
[14:56] *** ggoebel left
[15:00] *** ggoebel joined
[16:08] *** MasterDuke joined
[16:10] nine: on inlinable_getlexdyn I get `+++ Compiling   gen/moar/stage1/MASTOps.moarvm
[16:10] Missing infixish operator precedence at line 1085, near " $value un"
[16:10]    at gen/moar/stage2/NQPHLL.nqp:1059  (src/vm/moar/stage0/NQPHLL.moarvm:panic)
[16:10]  from gen/moar/stage2/NQPHLL.nqp:1317  (src/vm/moar/stage0/NQPHLL.moarvm:EXPR)` when building nqp
[16:52] and yet another Rakudo Weekly News hits the Net: https://rakudoweekly.blog/2022/03/28/2022-13-roadmapping/
[17:28] *** sena_kun left
[17:29] *** sena_kun joined
[17:30] https://danluu.com/cgroup-throttling - interesting
[17:47] for MoarVM maybe mostly "make sure we're aware of cpu counts in container situations"; just today I stumbled upon a postgres that had been configured to use 16x as much RAM as it has available, presumably because the tuning tool saw only the host memory size, not what the container / cgroup had
[17:51] *** sena_kun left
[17:52] *** sena_kun joined
[19:42] > The .NET runtime has had adaptive thread pool sizes for a decade, one of the many ways the .NET stack is more advanced than what we commonly see at trendy tech companies
[20:10] timo: it is my understanding that MoarVM asks libuv about the number of cores, and that libuv does the right thing wrt containers
[20:13] that sounds very good already!
[20:34] *** finanalyst joined
[22:42] *** finanalyst left
[23:40] *** raiph joined
[23:41] *** raiph left
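As a quick check of the "cpu counts in container situations" concern above, a one-liner like the following (a sketch, assuming a reasonably recent Rakudo) shows the core count MoarVM ended up with; running it both inside and outside the container makes a cgroup mismatch obvious.

    # Kernel.cpu-cores reports the core count MoarVM obtained (per the
    # discussion above, via libuv); the default ThreadPoolScheduler sizes
    # itself from this, and RAKUDO_MAX_THREADS can override the pool cap.
    say $*KERNEL.cpu-cores;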