github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
00:06 lizmat_ joined 00:09 lizmat left 00:36 lizmat_ left 00:37 lizmat joined 01:08 lizmat_ joined 01:12 lizmat left 01:36 Altai-man_ joined 01:37 lizmat_ left 01:38 lizmat joined 01:39 sena_kun left 02:09 lizmat_ joined 02:12 lizmat left 02:26 leont left 02:29 leont joined 02:39 leont left 02:40 lizmat joined 02:43 lizmat_ left 03:10 lizmat_ joined 03:12 travis-ci joined
travis-ci MoarVM build failed. Altai-man 'Merge pull request #1253 from MoarVM/2020.02.1-release 03:12
travis-ci.com/tbrowder/MoarVM/builds/151290341 github.com/tbrowder/MoarVM/compare...7317dd3a06
03:12 travis-ci left 03:14 lizmat left 03:37 sena_kun joined 03:38 Altai-man_ left 03:41 lizmat joined 03:45 lizmat_ left 04:12 lizmat_ joined 04:15 lizmat left 04:42 lizmat joined 04:46 lizmat_ left 05:13 sena_kun left, sena_kun joined, lizmat_ joined 05:16 lizmat left 05:36 Altai-man_ joined 05:38 sena_kun left 05:44 lizmat joined 05:47 lizmat_ left 06:15 lizmat_ joined 06:18 lizmat left 06:44 lizmat_ left 06:46 lizmat joined 07:16 lizmat_ joined 07:20 lizmat left 07:37 sena_kun joined 07:39 Altai-man_ left 07:47 lizmat joined 07:50 lizmat_ left 08:17 lizmat_ joined 08:20 lizmat left 08:48 lizmat joined 08:51 zakharyas joined, lizmat_ left 08:52 lizmat_ joined 08:56 lizmat left 09:07 lizmat_ is now known as limzat 09:08 limzat is now known as lizmat 09:36 Altai-man_ joined 09:39 sena_kun left
lizmat and yet another Rakudo Weekly hits the Net: rakudoweekly.blog/2020/03/02/2020-...ubenreuth/ 10:37
11:05 zakharyas left 11:07 zakharyas joined 11:31 Guest1277 joined 11:37 sena_kun joined 11:38 lizmat_ joined, Altai-man_ left 11:40 lizmat left 11:50 lizmat_ left 12:30 zakharyas left 12:48 leont joined 13:25 lucasb joined
Guest1277 is everyone suddenly hiding in Erlangen? 13:32
nwc10 not *yet* 13:33
leont NO :-( 13:36
13:36 Altai-man_ joined 13:38 sena_kun left
timotimo i'm not going, sorry 14:27
(also, probably good since i'm rather a little sick right now)
Kaiepi can someone review github.com/MoarVM/MoarVM/pull/1240 ? it makes builds less annoying when libtommath/libuv/whatever get updated on openbsd 14:28
14:31 zakharyas joined
nine At work we've got a core dump caused by a segfault in MVMSpeshLog's gc_mark function. At first I thought there's no way to find and fix this without being able to reproduce it (the app was running fine all week), but maybe there is. 14:50
The segfault occurs because log->entries[i].invoke.sf points to inaccessible memory, i.e. that thing was probably already freed. 14:51
There are only 2 threads running: the spesh thread ran into the segfault, while the other thread triggered the GC run from MVM_frame_move_to_heap 14:53
Oh, the static frame that's missing is actually the initial_invoke of the spesh thread itself! 14:54
#7 0x00007f797c0b326c in MVM_interp_run (tc=0x532, tc@entry=0x561d64379d10, initial_invoke=0x176ae800f8, invoke_data=0x7f79762150b0, invoke_data@entry=0x561d6437b0a0) at src/core/interp.c:162
Guest1277 nine: is sf = ShadowFacts? 15:05
nine static frame 15:08
Guest1277 aha
Guest1277 goes back into lurking mode 15:09
nine I think it's just coincidence that the address matches that of initial_invoke. After all that will have been GCed and the memory reused. 15:15
timotimo well that's super weird :) 15:17
15:37 sena_kun joined 15:39 Altai-man_ left
nine Btw. for sure one of the loveliest systemd features: gist.github.com/niner/eeb3f329cb34...91fe4b9f93 15:58
Wait a minute... the segfault is in line 56: MVM_gc_worklist_add(tc, worklist, &(log->entries[i].invoke.sf)); 16:11
But: p log->entries[i]
$3 = {kind = 7, id = 341579, {entry = {sf = 0x176ae800f8, cs = 0x1b400000000}, param = {type = 0x176ae800f8, flags = 0, arg_idx = 436}, type = {type = 0x176ae800f8, flags = 0, bytecode_offset = 436}, value = {value = 0x176ae800f8, bytecode_offset = 0}, invoke = {sf = 0x176ae800f8, caller_is_outer = 0,
was_multi = 0, bytecode_offset = 436}, osr = {bytecode_offset = 1793589496}, plugin = {bytecode_offset = 1793589496, guard_index = 23}}} 16:12
The log entry has kind 7 which is MVM_SPESH_LOG_RETURN, not MVM_SPESH_LOG_INVOKE
How do we end up there?
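(A minimal standalone reconstruction of the aliasing in play here. The layout and field names below are taken from the gdb dump above, not from MoarVM's actual headers: every view of the union shares the same storage, so a kind-7 RETURN entry printed through the invoke view still shows the stale pointer in its first word.)

#include <stdio.h>
#include <stdint.h>

/* Layout reconstructed from the gdb dump above; illustrative only,
 * the real MVMSpeshLogEntry in MoarVM's headers differs. */
typedef struct {
    int32_t kind;                /* 7 == RETURN in the dump */
    int32_t id;                  /* 341579 == 0x5364B */
    union {
        struct { void *sf; void *cs; } entry;
        struct { void *sf; int32_t caller_is_outer;
                 int32_t was_multi; int32_t bytecode_offset; } invoke;
        struct { int32_t bytecode_offset; } osr;
    };
} LogEntry;

int main(void) {
    LogEntry e = { .kind = 7, .id = 341579 };
    e.entry.sf = (void *)(uintptr_t)0x176ae800f8;  /* the stale static frame */
    /* entry.sf and invoke.sf alias the same storage, which is why a
     * RETURN entry can still "have" an invoke.sf worth marking. */
    printf("kind=%d entry.sf=%p invoke.sf=%p\n",
           e.kind, e.entry.sf, e.invoke.sf);
    return 0;
}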
timotimo i wonder if the call frame correlation id is valid? 16:15
m: say 341579.base(16)
evalable6 5364B
nine You mean the whole entry might be corrupted? 16:17
timotimo that doesn't really explain the odd source line position, though if you've got an optimized build, it could just be odd line annotations because of re-ordered code 16:18
i'm not skilled enough at asm reading to just go through the entire function to figure out what's going on 16:19
nine Well from an assembly perspective, all the calls in that switch are actually identical, so the whole switch is superfluous. I guess the compiler is smart enough to notice this, too. 16:34
Which does make me wonder how it would be able to know which line number we're at exactly. My guess is: it just doesn't and takes the last one. 16:37
So it's really an inaccessible type in a spesh log. This has happened before. 16:38
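(To illustrate nine's observation as a standalone example: when every case of a switch calls the same function on union members that share an offset, the optimizer can fold all arms into one call, and the line numbers in a backtrace then point at whichever arm the compiler happened to keep. Names here are made up, not MoarVM's; try cc -O2 -S to see the folding.)

#include <stdio.h>

typedef struct {
    int kind;
    union { void *sf; void *type; void *value; };
} Entry;

static void mark(void **slot) { printf("mark %p\n", (void *)slot); }

void mark_entry(Entry *e) {
    switch (e->kind) {
    case 0: mark(&e->sf);    break;   /* INVOKE-like  */
    case 1: mark(&e->type);  break;   /* TYPE-like    */
    case 2: mark(&e->value); break;   /* VALUE-like   */
    }
    /* &e->sf == &e->type == &e->value, so at -O2 the three arms
     * become a single call guarded by a range check on kind. */
}

int main(void) {
    Entry e = { .kind = 1 };
    mark_entry(&e);
    return 0;
}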
16:49 zakharyas left
nine I guess my only chance at this point is to look at the nearby spesh log entries in the hope that this will give some indication of where the faulty one may come from 17:14
17:36 Altai-man_ joined 17:39 sena_kun left
vrurg Could be related to yours: "raku(35863,0x700009d55000) malloc: *** error for object 0x7fee7d18a800: pointer being realloc'd was not allocated" – Test::Mock threads.t fails with this regularly. 17:50
nine doesn't see a connection 17:54
18:01 Altai-man_ left 18:19 sena_kun joined 19:32 lizmat joined
japhb vrurg: See gist.github.com/japhb/8588c009de25...9cae15e905 and colabti.org/irclogger/irclogger_lo...02-28#l196 -- Test::Mock's tests appear to be discovering that something recent (post-2020.02) broke OO::Monitors. (Not as in "made it fail tests" but as in "made it stop preventing races".) 19:35
vrurg japhb: Can't get deeper into it, but to me it currently looks like something about moar. 19:36
19:38 AlexDaniel joined
vrurg japhb: First of all, it's the error I showed above. Also, were it a problem with dispatching, it would ruin the test totally, because there is no room for flapping. 19:38
19:38 AlexDaniel left, AlexDaniel joined
japhb vrurg: jnthn had been guessing that something in the dispatch changes broke the way that OO::Monitors does method wrapping to manage locks, so that it appears to work but doesn't actually prevent thread races. 19:40
(Or maybe OO::Monitors doesn't detect the way that it fails to wrap now, and thus thinks it has done the right thing when it really hasn't.)
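(For context, a monitor in the OO::Monitors sense serializes every method call on an instance through a per-instance lock. A minimal sketch of the idea in C with pthreads, purely illustrative: if the lock-taking wrapper silently stops being applied, as jnthn suspects above, the method body runs unprotected and races surface only intermittently, i.e. as flapping tests.)

#include <pthread.h>
#include <stdio.h>

/* A "monitor": the lock wraps the whole method body, so two threads
 * can never execute a method on the same instance concurrently. */
typedef struct {
    pthread_mutex_t lock;
    int counter;
} Monitor;

void monitor_bump(Monitor *m) {
    pthread_mutex_lock(&m->lock);   /* what the method wrapper adds */
    m->counter++;                   /* the actual method body */
    pthread_mutex_unlock(&m->lock);
}

int main(void) {                    /* build with: cc -pthread ... */
    Monitor m = { PTHREAD_MUTEX_INITIALIZER, 0 };
    monitor_bump(&m);
    printf("%d\n", m.counter);
    return 0;
}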
vrurg japhb: BTW, Blin found nothing about OO::Monitors. Anyway, the current implementation is buggy. The fix is waiting for a MoarVM PR to be merged. 19:41
japhb vrurg: Right, OO::Monitors' own tests *don't* detect it. But for some reason, it seems Test::Mock's tests *do* (because Test::Mock is implemented as a monitor). That said, of course you could have found an additional failure mode/cause. :-) 19:42
vrurg Could be. Too many factors for now. 19:45
Meanwhile, somebody: github.com/MoarVM/MoarVM/pull/1252, please! :)
My future talk at TPC might depend on this PR... 19:46
Geth MoarVM: 947ebfcb0c | (Vadim Belman)++ | 10 files
Add nextdispatcherfor/takenextdispatcher ops

Provide support for Raku chained dispatchers. Their purpose is to pass information to the downstream dispatcher about which upstream dispatcher must take over control next when the downstream one is exhausted.
For example, when one wraps a candidate in a multi, the `WrapDispatcher` must know about the instance of `MultiDispatcher` to switch back to it when all wrappers are done and the candidate calls one of
  {next|call}{same|with}.
MoarVM: 0b4bdeccbc | (Vadim Belman)++ | 6 files
Merge branch 'master' into rakudo_3499
19:47
MoarVM: a7fa6daadd | (Elizabeth Mattijsen)++ (committed using GitHub Web editor) | 10 files
Merge pull request #1252 from vrurg/rakudo_3499

Add nextdispatcherfor/takenextdispatcher ops
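(A standalone sketch of the chained-dispatcher idea the commit message above describes, in C and with made-up names; the real mechanism lives in the nextdispatcherfor/takenextdispatcher ops at the VM level. Each dispatcher records which upstream dispatcher resumes control once its own candidates are exhausted, e.g. a wrap dispatcher falling back to the multi dispatcher whose candidate it wrapped.)

#include <stdio.h>

typedef struct Dispatcher Dispatcher;
struct Dispatcher {
    const char  *name;
    int          idx, n_candidates;
    Dispatcher  *next;   /* upstream dispatcher to resume when exhausted */
};

/* One step of nextsame-style redispatch. */
void call_next(Dispatcher *d) {
    while (d && d->idx >= d->n_candidates)
        d = d->next;                     /* takenextdispatcher, roughly */
    if (!d) { printf("no more candidates\n"); return; }
    printf("%s: candidate %d\n", d->name, d->idx++);
}

int main(void) {
    Dispatcher multi = { "MultiDispatcher", 0, 2, NULL };
    Dispatcher wrap  = { "WrapDispatcher",  0, 2, &multi };  /* nextdispatcherfor */
    for (int i = 0; i < 5; i++)          /* drains wrappers, then the multi */
        call_next(&wrap);
    return 0;
}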
lizmat vrurg: I assume you'll take care of the NQP bumping and stuff?
vrurg lizmat++ Thanks a lot! Sure, I will.
In any case, it's a chain of PRs.
vrurg is now away. Making bookings for the trip to TPC. 19:49
20:20 Altai-man_ joined 20:23 sena_kun left 21:08 Kaiepi left 21:09 Kaiepi joined
MasterDuke it seems like maybe spesh logs aren't being freed up when run with --full-cleanup. how would that be done? 21:53
and/or spesh workers 21:56
22:05 lucasb left 22:22 sena_kun joined 22:23 Altai-man_ left 23:21 Altai-man_ joined 23:24 sena_kun left