github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
00:00 reportable6 left 00:04 reportable6 joined 00:50 Voldenet joined, Voldenet left, Voldenet joined 00:55 lucasb left 01:45 Voldenet left 01:51 Voldenet joined, Voldenet left, Voldenet joined 05:44 AlexDaniel joined 06:00 reportable6 left, reportable6 joined 06:35 brrt joined 07:00 brrt left 07:14 patrickb joined
Geth MoarVM: 00a1ddb9f9 | (Stefan Seifert)++ | src/core/frame.c
Fix GC processing uninitialized data in autoclosing of frames

Deserialization of objects may cause allocation while we're still initializing the frame's env array. This would make the GC process uninitialized data leading to...interesting behaviour. Fix by zeroing out the newly allocated env array before initializing it with the actual data.
07:27
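A minimal sketch of the zeroing fix the commit describes, assuming an illustrative slot count and a hypothetical deserialize_lexical() helper; the real change is in src/core/frame.c:

```c
/* Sketch only: zero the env array before filling it, so a GC triggered by
 * deserialization mid-loop sees NULL slots instead of uninitialized memory.
 * deserialize_lexical() is a made-up stand-in for the real setup code. */
MVMRegister *env = MVM_malloc(num_env_slots * sizeof(MVMRegister));
memset(env, 0, num_env_slots * sizeof(MVMRegister));   /* the fix */
frame->env = env;
for (MVMuint32 i = 0; i < num_env_slots; i++)
    env[i] = deserialize_lexical(tc, frame, i);         /* may allocate and trigger GC */
```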
nine This one affected a couple of socket/async tests. IIRC those are also ones that like to flap. 07:28
Geth MoarVM: 606c443726 | (Stefan Seifert)++ | src/io/asyncsocketudp.c
Fix possible access to fromspace in UDP socket setup

async_task (and with it t) may be moved by an untimely GC triggered by allocation of arr.
07:37
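A hedged sketch of the hazard and fix, with variable names taken from the commit message but the surrounding code simplified, so not the literal diff in src/io/asyncsocketudp.c:

```c
/* Allocating arr can trigger GC, which may move async_task; rooting it keeps
 * the local pointer updated across the allocation, and t is re-derived only
 * afterwards. */
MVMObject *arr;
MVMROOT(tc, async_task, {
    arr = MVM_repr_alloc_init(tc, tc->instance->boot_types.BOOTArray);
});
t = (MVMAsyncTask *)async_task;
```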
07:56 zakharyas joined
Geth MoarVM: 78164189d6 | (Stefan Seifert)++ | src/strings/unicode_ops.c
Remove bogus MVMROOT from MVM_unicode_string_compare

a_ci and b_ci are both stack allocated and not even MVMCollectables. Also MVM_string_ci_init doesn't collect anyway. So there's no need for an MVMROOT and worse, debug checks or future code changes may trip over this.
07:59
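For illustration, a sketch of the anti-pattern the commit removes; the iterator type and elided call are approximations of src/strings/unicode_ops.c rather than its exact lines:

```c
/* MVMROOT pushes &a_ci into the temp-root set as if it were a collectable
 * pointer, so the GC's debug checks would interpret raw struct bytes as an
 * object reference. Since the iterator init doesn't allocate, the root can
 * simply be dropped, which is what the commit does. */
MVMCodepointIter a_ci;            /* plain stack struct, not an MVMCollectable */
MVMROOT(tc, a_ci, {               /* bogus: removed by the commit */
    /* ... MVM_string_ci_init(tc, &a_ci, a, ...) ... */
});
```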
nine First time I've encountered a harmful MVMROOT :)
08:02 robertle joined
lizmat nine++ # cool work 08:06
08:32 domidumont joined 08:33 domidumont left
Guest37021 nine++, very cool, I suspect the autoclose bug is a perfect match for the gist you commented on yesterday 09:46
Geth MoarVM: bca6f39248 | (Stefan Seifert)++ | src/strings/ops.c
Fix possible access to fromspace when concatenating strings with many strands

collapse_strands may allocate, so we need to MVMROOT the string that we don't collapse.
09:56
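A simplified sketch of the rooting pattern named in the commit message; collapse_strands' shape is assumed from its use in src/strings/ops.c, and the snippet is not the literal diff:

```c
/* Collapsing the many-stranded string can allocate and therefore trigger GC;
 * without a root, the other string could be left pointing into fromspace. */
MVMROOT(tc, b, {
    a = collapse_strands(tc, a);   /* may allocate and move nursery objects */
});
/* ... concatenate the now-flat a with the still-valid b ... */
```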
nine Guest37021: oh, indeed! I didn't even notice :)
But it shows that it's indeed better to stick to my process and not get sidetracked into remote debugging sessions. This was hard enough to figure out with all the tools at hand 09:57
10:03 sena_kun joined
nine Next up: "process_worklist: Assertion `*item_ptr != item->sc_forward_u.forwarder' failed." 10:03
Appears in 2 spec test files, both using atomicint. Coincidence? 10:04
Guest37021 ah, that's a classic, have seen that many times
nine That's gonna be interesting. I have no idea how to approach this. It's a completely different kind of error 10:08
10:09 brrt joined
brrt \o 10:11
Guest37021 hello brrt 10:15
brrt ohai Guest37021 / dogbert17 10:19
nine Btw. one of the spec tests is still running with 836 CPU minutes and counting... 10:20
brrt tf
Guest37021 is it related to slip? 10:21
nine Seems to be an endless loop in MVM_string_gi_get_grapheme during string concatenation 10:23
Guest37021 oops 10:24
nine Cannot reproduce that one, so it's either another flapper, or fixed by one of my commits today 10:25
Oh, but now I got "Internal error, string corruption in iterate_gi_into_string" from it. It's t/spec/S02-magicals/env.rakudo.moar btw
Guest37021 FWIW, asan and valgrind are silent when running that one 10:29
nine I think we can get more out of those by MVM_free()ing the nursery at each collection, rather than re-using it. 10:30
Guest37021 agreed 10:31
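A hypothetical sketch of the idea, assuming the nursery_fromspace/nursery_tospace fields on MVMThreadContext; this is a debug-only tweak being proposed, not existing MoarVM behaviour:

```c
/* Instead of swapping the semispaces and re-using the old fromspace, free it
 * and allocate a fresh block each collection, so ASan/valgrind flag any later
 * read through a stale fromspace pointer as a use-after-free. The nursery
 * alloc/limit pointers would need re-pointing as well; elided here. */
void *old_fromspace = tc->nursery_fromspace;
tc->nursery_fromspace = tc->nursery_tospace;
tc->nursery_tospace   = MVM_calloc(1, MVM_NURSERY_SIZE);
MVM_free(old_fromspace);           /* debug builds only */
```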
nine nwc10: do you happen to remember the purpose of this assert you added back in 2014? github.com/MoarVM/MoarVM/blob/mast...ect.c#L218 Wouldn't this just happen if e.g. a variable is MVMROOTed multiple times? 10:47
Guest37021 it would be quite impressive if he does remember :) 10:53
Guest37021 tends to forget stuff written a few weeks earlier 10:57
11:14 zakharyas left
nine As little sense as that assertion makes to me, it seems to be onto something. When I remove it, I get a different error and it involves an object that looks suspiciously like the one the assertion complained about 11:46
It's an Int in a Scalar 11:48
and the Scalar is already in gen2
And the Int belongs to thread 8 (the current one) while the Scalar belongs to thread 1 11:59
12:00 reportable6 left
nine I bet that's somehow significant. Otherwise I wouldn't know why this error appears only in multi threaded tests 12:00
timotimo perhaps something the cross-thread-write log could figure out
nine what's that? 12:01
timotimo just™ turn on MVM_CTW_LOG in the environment or something and it'll spit out whenever a thread writes to an object that belongs to another thread
can point out when you're accidentally sharing data between threads
12:04 reportable6 joined
nine My MoarVM source doesn't know anything about MVM_CTW_LOG 12:04
timotimo MVM_CROSS_THREAD_WRITE_LOG Log unprotected cross-thread object writes to stderr 12:05
less short than i remembered
nine Yeah, getting stuff like: Thread 4 bound to an attribute of an object (ThreadPoolScheduler::GeneralWorker) allocated by thread 1 at SETTING::src/core/ThreadPoolScheduler.pm6:237 (/home/nine/rakudo/install/share/perl6/runtime/CORE.setting.moarvm:run-one) 12:06
timotimo there are going to be many false-positives 12:07
there's a big list of things that it doesn't report to reduce spam
nine It doesn't seem to catch this particular write 12:19
timotimo i wonder if some recent change to how scalars work wasn't yet reflected in the CTW instrumentation code perhaps? 12:20
but a lot of that is only for stuff that doesn't escape, so it wouldn't go across threads anyway
12:44 zakharyas joined
nine This...is complicated. Thread 5 adds a nursery object (the Int) to the worklist which is referenced by a Scalar from thread 1 while thread 1's GC is actually stolen by thread 3 12:46
12:47 discord6 joined, discord6 left, discord6 joined
timotimo all of that should be fine 12:48
gcs pass work to each other when they encounter objects that belong to other threads
at least that's how i understand it
that's the "in trays"
12:53 sena_kun left
nine Wait...it complains about a pointer to past fromspace. But the pointer is 0x7fffd80a6788. tc->nursery_alloc is 0x7fffe80fae70 and tc->nursery_alloc_limit is 0x7fffe810ae70 12:59
So....the address is actually not in the nursery at all?
So how can this condition trigger? if ((char *)*item_to_add >= (char *)tc->nursery_alloc && (char *)*item_to_add < (char *)tc->nursery_alloc_limit) MVM_panic(1, "Adding pointer %p to past fromspace to GC worklist", *item_to_add); 13:00
nwc10 nine: network here is still terrible 13:02
so, about 90 minutes later i have a connection again
oh, for 6 seconds?
that line was changed in commit 92f262c7787c434789604e31f32006f4b16694d5
I think added in 0635912d4b9dfe8c95a84e9185e82f6cd6588991 13:03
I think that the assert relates to this comment in that commit 13:04
Also assert that if the pointer is in to-space already, it is never marked as
gen2.
13:19 pamplemousse joined 13:27 sena_kun joined 13:39 lucasb joined 14:52 brrt left 15:06 patrickb left
Kaiepi anyone good at using libuv? 15:23
i'm making it so moarvm keeps track of whether ipv6 actually works: when connecting to an ipv6 address with AF_UNSPEC specified fails with EHOSTUNREACH or ETIMEDOUT (which usually indicates that either the firewall or a network interface's ipv6 address is misconfigured), it retries connecting using an ipv4 address. i have it working for sync sockets, but there's an error i can't figure out how to fix when i do the same for async sockets 15:26
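A minimal libuv-only sketch of that fallback idea (not MoarVM's actual patch); the connect_ctx struct and its fields are hypothetical:

```c
#include <uv.h>

/* Hypothetical per-connection context; MoarVM's real code would carry this
 * through its async task instead. */
typedef struct {
    uv_tcp_t                handle;
    uv_connect_t            req;
    struct sockaddr_storage ipv4_addr;   /* pre-resolved AF_INET fallback */
    int                     tried_ipv4;
} connect_ctx;

static void on_connect(uv_connect_t *req, int status) {
    connect_ctx *ctx = (connect_ctx *)req->data;
    if ((status == UV_EHOSTUNREACH || status == UV_ETIMEDOUT) && !ctx->tried_ipv4) {
        /* IPv6 route/firewall looks misconfigured: retry against IPv4. Note
         * that libuv generally wants the tcp handle closed and re-initialised
         * before a second connect attempt, plausibly the kind of detail that
         * bites in the async case. */
        ctx->tried_ipv4 = 1;
        uv_tcp_connect(&ctx->req, &ctx->handle,
                       (const struct sockaddr *)&ctx->ipv4_addr, on_connect);
        return;
    }
    /* otherwise report success or the original error upward */
}
```

Before the first connect, ctx->req.data would point back at ctx so the callback can find its context.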
15:30 robertle left
Kaiepi timotimo? 15:31
actually wait, i have an idea. lemme test it first 15:41
15:58 brrt joined
nwc10 brrt: I read morepypy.blogspot.com/2019/07/pypy...rch64.html and noticed "If someone has access to a beefy enough (16G) ARM64 server and is willing to give us access to it, we are happy to redo the benchmarks on a real machine." 16:00
brrt nwc10: as it happens, I don't have a 16G ARM64 server :-) 16:03
would if I could...
nwc10 I don't either. I was planning on buying a 4G ARM64 board soon-ish 16:05
brrt oh, but hey, someone replied with 'I can haz ARM64'
16:05 evalable6 left 16:08 evalable6 joined 16:14 evalable6 left 16:16 evalable6 joined 16:17 evalable6 left 16:22 evalable6 joined 16:26 brrt left 16:27 chloekek joined 16:32 pamplemousse left 16:39 dogbert17 joined 16:50 evalable6 left 16:51 evalable6 joined, evalable6 left 16:52 evalable6 joined
Geth MoarVM/master: 4 commits pushed by ZhongnianTao++, (Samantha McVey)++ 16:55
17:04 evalable6 left 17:06 evalable6 joined 17:10 evalable6 left 17:12 evalable6 joined, evalable6 left 17:13 evalable6 joined 17:55 zakharyas left 17:58 robertle joined 18:00 reportable6 left 18:02 pamplemousse joined 18:04 reportable6 joined
Kaiepi got retrying connections over ipv4 when ipv6 fails for reasons that are the user's fault working for async sockets! 18:08
timotimo that is very useful 19:03
ugexe it is always the users fault 19:17
19:46 zakharyas joined 19:47 pamplemousse left 19:48 pamplemousse joined 19:52 pamplemousse left
nine Oh boy, that's devious. Between the debug check and the actual MVM_panic because of the failed check, the item causing the failure gets moved! That's why the address doesn't appear to be in the nursery. Because it isn't anymore. 20:44
This means that another thread is actually moving the item. 20:45
20:46 zakharyas left
nine And that is why the original assertion failed! The assertion is meant to check that an object's forwarder is not the same as the object itself. But the way the assertion is phrased, it re-reads the variable holding the pointer and compares that to the forwarder of the item in the nursery. The variable has already been updated, so of course it points at the same address as the forwarder 20:58
I just fear I lack the knowledge about how the GC and threads are supposed to interact to fix this 21:03
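A paraphrase in code of the double read being described, with a hypothetical in_fromspace() helper standing in for the real range check in the GC worklist debug macro:

```c
/* Another thread can move the object and rewrite *item_ptr between the two
 * reads below, so the panic message (and likewise the earlier forwarder
 * assertion) ends up describing the already-updated pointer rather than the
 * value that actually failed the check. */
static void worklist_add_checked(MVMThreadContext *tc, MVMCollectable **item_ptr) {
    if (in_fromspace(tc, *item_ptr))                          /* read #1 */
        MVM_panic(1, "Adding pointer %p to past fromspace to GC worklist",
                  *item_ptr);                                 /* read #2 */
}
```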
21:10 pamplemousse joined 21:13 chloekek left
timotimo ouchies 21:14
21:15 pamplemousse left
Geth MoarVM: Kaiepi++ created pull request #1152:
[IP6NS Grant] Retry connecting over IPv4 when IPv6 connections fail in certain cases
21:34
Kaiepi codefactor pls 4 lines of assigning variables isn't complex 21:45
23:11 sena_kun left 23:48 lucasb left