[00:07] *** frost-lab joined
[00:31] *** frost-lab left
[00:40] *** mrmurman joined
[00:43] *** mrmurman left
[00:43] *** sivoais left
[00:48] cog: you mean like nested functions? Those aren't a good idea anywhere
[01:08] *** frost-lab joined
[01:16] Yea, nested functions.
[01:18] Those require an executable stack to generate 'closure' thunks. Not pretty. I wouldn't use them under any circumstances.
[01:28] Anyway, I find they are not standard, which answers my question
[02:33] *** Geth left
[02:33] *** Geth joined
[03:33] *** greppable6 left
[03:33] *** linkable6 left
[03:33] *** committable6 left
[03:33] *** sourceable6 left
[03:33] *** nativecallable6 left
[03:33] *** quotable6 left
[03:33] *** tellable6 left
[03:33] *** evalable6 left
[03:33] *** unicodable6 left
[03:33] *** benchable6 left
[03:33] *** squashable6 left
[03:33] *** shareable6 left
[03:33] *** coverable6 left
[03:33] *** bloatable6 left
[03:33] *** notable6 left
[03:33] *** releasable6 left
[03:33] *** bisectable6 left
[03:33] *** statisfiable6 left
[03:34] *** quotable6 joined
[03:34] *** nativecallable6 joined
[03:34] *** bloatable6 joined
[03:34] *** tellable6 joined
[03:34] *** greppable6 joined
[03:34] *** sourceable6 joined
[03:34] *** squashable6 joined
[03:34] *** releasable6 joined
[03:34] *** bisectable6 joined
[03:35] *** benchable6 joined
[03:35] *** evalable6 joined
[03:35] *** statisfiable6 joined
[03:36] *** notable6 joined
[03:36] *** shareable6 joined
[03:36] *** committable6 joined
[03:36] *** unicodable6 joined
[03:36] *** coverable6 joined
[03:36] *** linkable6 joined
[05:04] *** squashable6 left
[05:05] *** squashable6 joined
[05:05] *** squashable6 left
[05:06] *** squashable6 joined
[08:24] Happy new *, #moarvm
[08:53] *** sena_kun joined
[09:14] Happy ++$!year
[09:58] *** sena_kun left
[10:16] *** sena_kun joined
[11:08] ¦ MoarVM: JJ++ created pull request #1415: Helps build from scratch
[11:08] ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/1415
[11:56] *** Altai-man joined
[11:59] *** sena_kun left
[12:06] Happy New Year to all the MoarVM folk!
[14:07] *** linkable6 left
[14:07] *** evalable6 left
[14:08] *** evalable6 joined
[14:08] *** linkable6 joined
[14:14] ¦ MoarVM: aa3f015b49 | (Juan Julián Merelo Guervós)++ (committed using GitHub Web editor) | README.markdown
[14:14] ¦ MoarVM: Helps build from scratch
[14:14] ¦ MoarVM:
[14:14] ¦ MoarVM: After trying to build it, I realized that `make` does not copy to `prefix/bin`, so that's included. Also, this might be unnecessary, but it would be good to clarify that it's not a prefix for the executable, but for everything.
[14:14] ¦ MoarVM: Also, it's not totally clear that make install is actually working. Working on it right now.
[14:14] ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/aa3f015b49
[14:14] ¦ MoarVM: f36eaf0a68 | (Jonathan Worthington)++ (committed using GitHub Web editor) | README.markdown
[14:14] ¦ MoarVM: Merge pull request #1415 from JJ/patch-1
[14:14] ¦ MoarVM:
[14:14] ¦ MoarVM: Helps build from scratch
[14:14] ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/f36eaf0a68
[15:09] *** evalable6 left
[15:09] *** linkable6 left
[15:11] *** evalable6 joined
[15:12] *** linkable6 joined
[15:57] *** sena_kun joined
[15:59] *** Altai-man left
[16:02] *** zakharyas joined
[16:35] *** patrickb joined
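A note on the nested-functions exchange above (00:48-01:28): the feature under discussion is presumably GCC's nested-function extension to C. When the address of a nested function is taken, GCC materialises a trampoline (a small thunk of machine code that loads the captured frame pointer) on the enclosing function's stack frame, which is why the stack must be executable. A minimal sketch of the GCC-only construct (clang rejects it):

```c
#include <stdio.h>

static int outer(int x) {
    /* GCC extension: `inner` is nested inside `outer` and captures `x`. */
    int inner(int y) { return x + y; }

    /* Taking the address forces GCC to write a trampoline onto outer's
     * stack frame so the pointer can carry the captured environment --
     * the "executable stack" requirement mentioned at 01:18. */
    int (*fp)(int) = inner;
    return fp(1);
}

int main(void) {
    printf("%d\n", outer(41));  /* prints 42 */
    return 0;
}
```

Calling a nested function directly needs no trampoline; it's the pointer-taking use that does, and such code typically causes GCC to mark the whole stack executable, losing the usual NX protection — the "not pretty" part.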
[17:48] Trying to make it through make test with --asan is a lot harder than expected.
[17:49] To get through anything without ASan complaining (rightfully), one needs to run with --full-cleanup. But that currently results in aborts if any background threads are running (e.g. when the ThreadPoolScheduler is in use), as we don't clean them up.
[17:50] The abort comes from uv_mutex_destroy, which gets an error from pthread_mutex_destroy, because destroying a mutex that's currently in use (say, by a waiting condition variable) is not allowed.
[17:51] I'd been consistently running with ASAN_OPTIONS=detect_leaks=0
[17:51] and decided that "fix the leaks" was a task for the future
[17:51] I hadn't even realised that there was the problem you described
[17:51] But I'd actually like it to find leaks :) See the asan_fixes branch I pushed yesterday. That's 2 actual memory leaks fixed
[17:52] I had assumed that some of the leaks I saw were because the spesh thread didn't get to stop and tidy up. (I might be wrong on this guess. I did not look hard.)
[17:52] With those and the 3 --full-cleanup fixes it contains, I get through the rakudo build and far enough to even run make test, which crashes and burns
[17:52] I'd been looking for use-after-free and similar errors. Leaks were for later.
[17:52] Yes, a lot of the reports are simply because of the spesh thread
[17:52] *** hankache joined
[17:53] The leak in MVM_unicode_name_to_property_code is as straightforward as they come
[17:53] Oh, ab training starts
[17:59] Anyway, I have made some progress on cleaning up those threads. I just wonder what jnthn will say to that.
[18:19] nine: So far as cleanup goes: with --full-cleanup, sure, do all we can, aim to leak nothing. For normal exit, just let the OS do it, since it's got to do the work anyway.
[18:19] If we want an env var for full cleanup too, or to enable it by default when we configure with --asan, that's fine by me also.
[18:28] So far as mutexes and so on go, the only general solution I know of is to not use OS-provided ones for actual locking. Instead, as the JVM does, just provide a way for a thread to put itself to sleep ("park") and for another thread to awaken it later, then implement locks in userspace. No prizes for guessing why I've not hurried to do this. :)
[18:29] The thread pool scheduler is just one way we can come upon a mutex / cond var. So far as I know, it's not just a mutex with a cond var waiting on it that's a problem; it's any mutex that's locked at all.
[18:29] (That causes the abort if you try to destroy it)
[18:31] Adding some kind of "shutdown" functionality to the TPS may be workable, and could be useful by itself, but I expect it would come with a long list of caveats
[18:35] jnthn: so far I've added a flag to ConcBlockingQueue that gets set by MVM_vm_destroy_instance. The queue's shift impl checks this flag and returns VMNull if it's set. That can be checked in the TPS by a cheap while !nqp::isnull(my \task = nqp::shift($queue))
[18:37] That shift impl also sets the new blocking_queue field in ThreadContext (I assume a thread can only block on one queue at a time), so MVM_vm_destroy_instance can signal those mutexes for the background threads
[18:38] So far the total expense for the non-full-cleanup case is the nqp::isnull, a check of the ended flag, setting that blocking_queue field, and one more branch in ConcBlockingQueue's shift. I think that's reasonable.
[18:39] Hm, the flag makes every ConcBlockingQueue a bit bigger, though. Could this not have been done by pushing a sentinel value into the queue?
[18:39] What I don't know yet is how to signal the supervisor thread. That one just sleeps.
[18:39] (Which could well be a `null`)
[18:40] nine: Just wait for it to wake up, have it also check some "shutting down" flag in the TPS, and exit if it's set?
[18:40] I'd need to push a VMNull for every thread listening on the queue
[18:40] We know how many there are, though. Or, whenever anything sees a null, it just sends another one to replace what it consumed, and exits.
[18:41] The difficulty is in setting that flag from low-level code in the VM.
[18:42] But maybe the flag can be set by an exiting worker thread instead
[18:42] Oh, I was figuring that it'd be the high-level code coordinating
[18:43] Though that maybe gets interesting too. Hm.
[18:43] HLL code doesn't know whether we want full cleanup or not
[18:44] Hm, true
[18:44] That's only signified by the main program calling MVM_vm_destroy_instance vs. MVM_vm_exit
[18:44] The more I think about it, the more attractive the JVM approach gets :P
[18:45] (I think at some level I've long realized it's the better way to solve this, it's just a bit argggh to do)
[19:01] It's almost as if the JVM developers knew a thing or two about their field...
[19:02] jnthn: "I know this is what I'm going to want eventually but I'mma keep putting it off" is totally a thing :D
[19:06] If I'm *really* lucky I'll put it off so long that somebody else gets tired of waiting for it :)
[19:08] *** sena_kun left
[19:30] *** squashable6 left
[19:30] *** hankache left
[19:33] *** squashable6 joined
[19:41] Nevertheless, I'm going to continue on the current course tomorrow. Worst case, it ends up as a branch that's helpful to people running ASan.
[20:39] *** Voldenet left
[20:45] *** Voldenet joined
[20:45] *** Voldenet left
[20:45] *** Voldenet joined
[21:06] *** sena_kun joined
[21:48] *** hankache joined
[22:45] *** zakharyas left
[22:50] *** hankache left
[23:29] ¦ MoarVM: patrickbkr++ created pull request #1416: Switch spawnprocasync to use a separate arg for the program name
[23:29] ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/1416
[23:41] *** sena_kun left
[23:41] *** patrickb left
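An annotation on the 17:50 abort: POSIX makes it undefined to destroy a mutex that is still in use (locked, or referenced by a pthread_cond_wait); glibc detects the common cases and returns EBUSY, and libuv's uv_mutex_destroy aborts whenever pthread_mutex_destroy returns non-zero. A minimal sketch of the failure mode, independent of MoarVM:

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void) {
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    pthread_mutex_lock(&m);
    /* Undefined behaviour per POSIX; glibc reports it as EBUSY.
     * uv_mutex_destroy() would abort() right here. */
    int rc = pthread_mutex_destroy(&m);
    if (rc != 0)
        printf("destroy while locked failed: %s\n",
               rc == EBUSY ? "EBUSY" : "other error");

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);  /* fine once nothing is using the mutex */
    return 0;
}
```

Compile with -pthread; the exact return code is glibc behaviour rather than a POSIX guarantee, which is why the conversation treats the abort as legitimate rather than a libuv bug.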
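The two queue-shutdown ideas from 18:35-18:40 — nine's per-queue ended flag that makes shift return VMNull, and jnthn's null sentinel that each woken worker re-pushes for its siblings — can both be sketched on a toy mutex/condvar queue. This is an illustrative sketch only, not MoarVM's actual ConcBlockingQueue; all names are made up:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct Node { void *work; struct Node *next; } Node;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    Node *head, *tail;
    int ended;                 /* scheme 1: set once at VM teardown */
} BlockingQueue;

void queue_push(BlockingQueue *q, void *work) {
    Node *n = malloc(sizeof *n);
    n->work = work;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Blocks until work arrives; returns NULL (standing in for VMNull)
 * once the queue has ended, after draining any remaining work. */
void *queue_shift(BlockingQueue *q) {
    void *work = NULL;
    pthread_mutex_lock(&q->lock);
    while (!q->head && !q->ended)
        pthread_cond_wait(&q->nonempty, &q->lock);
    if (q->head) {
        Node *n = q->head;
        q->head = n->next;
        if (!q->head) q->tail = NULL;
        work = n->work;
        free(n);
    }
    pthread_mutex_unlock(&q->lock);
    return work;
}

/* Scheme 1: set the flag and wake every blocked consumer so each
 * one sees it. Roughly what the flag-in-the-queue idea does. */
void queue_end(BlockingQueue *q) {
    pthread_mutex_lock(&q->lock);
    q->ended = 1;
    pthread_cond_broadcast(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Scheme 2 (the sentinel alternative): push a single NULL; a worker
 * that shifts it re-pushes it before exiting, so one sentinel
 * suffices for any number of workers sharing the queue. */
void *worker(void *arg) {
    BlockingQueue *q = arg;
    void *task;
    while ((task = queue_shift(q)) != NULL) {
        /* ... run task ... */
    }
    queue_push(q, NULL);       /* pass the sentinel on */
    return NULL;
}
```

On the HLL side, the worker loop then matches the `while !nqp::isnull(my \task = nqp::shift($queue))` pattern nine quotes; the sentinel variant trades the extra per-queue field for one re-pushed null per shutdown.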
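Finally, the JVM-style approach jnthn describes at 18:28 (and warms to by 18:44) gives each thread a "parker" — a one-shot permit plus a private mutex/condvar pair, in the spirit of java.util.concurrent's LockSupport — and builds all user-visible locks in userspace on top of it. A hedged sketch of just the parker; the names are illustrative, not a MoarVM API:

```c
#include <pthread.h>
#include <stdbool.h>

/* One per thread, created and destroyed along with the thread itself. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    bool permit;   /* at most one stored wakeup, as in LockSupport */
} Parker;

#define PARKER_INIT \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

/* Block the calling thread until a permit is available, then consume
 * it. If unpark() ran first, the permit is already there: no sleep. */
static void park(Parker *p) {
    pthread_mutex_lock(&p->lock);
    while (!p->permit)
        pthread_cond_wait(&p->cond, &p->lock);
    p->permit = false;
    pthread_mutex_unlock(&p->lock);
}

/* Grant the permit, waking the thread if it is currently parked. */
static void unpark(Parker *p) {
    pthread_mutex_lock(&p->lock);
    p->permit = true;
    pthread_cond_signal(&p->cond);
    pthread_mutex_unlock(&p->lock);
}
```

A userspace mutex then becomes an atomic state word plus a queue of waiting threads' parkers: acquire tries a compare-and-swap and parks on failure; release unparks the queue head. Because such a lock owns no OS handles, only the per-thread parker (destroyed together with its thread) ever needs pthread_mutex_destroy, sidestepping the destroy-while-locked abort entirely.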