github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
dogbert17 .seen nine 18:31
tellable6 dogbert17, I saw nine 2019-12-17T17:37:33Z in #moarvm: <nine> But now I have to leave for some fitness training
nine I'm here 19:02
dogbert17 nine: cool, did you see my gist from yesterday? 19:16
nine dogbert17: I did and it's puzzling... but in the realm of possible consequences of a GC issue 19:44
dogbert17 a stupid question, is it possible that 'sc' and/or sc->body has changed when we come to line 2848? github.com/MoarVM/MoarVM/blob/mast...on.c#L2848 19:51
nine That's not a stupid question at all. And it may even be exactly what happens. MVMSerializationContext is itself just a 6model object and an MVMCollectable, so it is possible that it gets moved by the GC 20:04
MasterDuke nine: if you see the conversation over in #raku, moarvm throws an exception in stage mbc when trying to run a 100k line script of just variable declarations
nine Now the MVM_gc_allocate_gen2_default_set(tc) call would ordinarily prevent the GC from running, but that flag gets cleared when running the callback during deserialization of a NativeCall site 20:05
dogbert17: can you try if MVMROOTing sc makes the error go away? 20:06
MasterDuke which is a little unrealistic, but i don't know what exactly is the problem (i.e., whether it's 100k lines at all, or 100k different variable declarations), but rakudo/gen/moar/CORE.c.setting is 68k lines, so getting close to that problematic number
dogbert17 nine: I'll give it a go 20:16
dogbert17 nine: MVMRooting sc around line 2835 doesn't help, it still fails on line 2848 20:23
MasterDuke ha. so 70k lines throws an exception. 60k lines froze my computer and i had to hard-reset. tried again and watched with top, it gets to stage moar and then memory use just starts skyrocketing. i manually killed it at 22gb res 20:34
MasterDuke looks like it might be spesh 20:44
nine dogbert17: did you root it from 2835 to 2846? 20:48
MasterDuke huh, heaptrack thinks copy_guard_set in src/spesh/plugin.c is allocating tons and tons of ram, but disabling spesh doesn't help 20:49
dogbert17 yes and no :) at first only around 2835 but later all the way to before 2848 after that it started to panic insted
something about an item owned by zero added to worklist 20:51
dogbert17 tries again
MoarVM panic: Invalid owner in item added to GC worklist 20:52
MasterDuke why is spesh plugin stuff getting called even when i run with MVM_SPESH_DISABLE=1? 20:58
nine The naming may just be a bit confusing. 20:59
There are interpreter implementations of those ops
MasterDuke ?? copy_guard_set() isn't an op though 21:00
nine dogbert17: you do see a change at least :) That's usually better than no change at all 21:03
dogbert17: so what object is it complaining about?
MasterDuke: copy_guard_set is used by update_state which is used by add_resolution_to_guard_set which is used by call_resolver which is used by MVM_spesh_plugin_resolve which is used by the speshresolve op 21:05
There's also an MVM_spesh_plugin_resolve_spesh for actually speshed code
dogbert17 nine: I'll see if I can figure that out but gdb is playing tricks on me atm for unknown reasons 21:07
MasterDuke nine: huh. fwiw, result->num_guards in copy_guard_set only ranges between 2 and 9, it just gets called a lot 21:08
nine MasterDuke: I'd not be surprised if some algorithm in stage moar just doesn't scale well 21:09
I didn't think much about huge source files :) 21:10
MasterDuke i don't think it's stage moar. at least a time gets printed for that stage and then after that things go crazy
in this case it can't just be the number of lines, since this is less than the core setting. i'm assuming it's the number of variables created 21:11
dogbert17 #0 MVM_panic (exitCode=1, messageFormat=0x7ffff76706a8 "Invalid owner in item added to GC worklist") at src/core/exceptions.c:833 21:15
#1 0x00007ffff74d3fe7 in MVM_gc_root_add_temps_to_worklist (tc=0x603d60, worklist=0x3972030, snapshot=0x0) at src/gc/roots.c:285
seems to be this line: MVM_gc_worklist_add(tc, worklist, temproots[i]); btw, 'i' is equal to 0 21:20
is there some MVM_dump_object thingy available?
MasterDuke result->body.num_positions does get as high as 6865 in update_state() 21:21
heh. i printed out result->body.num_positions in update_state(). at first there are a couple small numbers, bounces back and forth with values between 1 and 9. then eventually it starts at 1 and increments to 6865. and update_state() is running a loop up to result->body.num_positions. no wonder there's memory growth 21:27
nine dogbert17: there's MVM_dump_p6opaque(tc, obj) but usually I just use something like `p *temproots[i]` to get an idea about what object I'm dealing with 21:36
dogbert17 (gdb) p **temproots[i] 21:39
$3 = {sc_forward_u = {forwarder = 0x7fffffffbfd0, sc = {sc_idx = 4294950864, idx = 32767}, st = 0x7fffffffbfd0}, owner = 4149001387, flags = 32767, size = 0}
looks nonsensical
nine absolutely
is it the rooted sc? 21:40
dogbert17 hmm, let's see 21:42
just to check, you wanted me to end the MVMRoot after this line: github.com/MoarVM/MoarVM/blob/mast...on.c#L2846 21:45
if I do that then valgrind suddenly gets quite upset 21:48
==23285== Conditional jump or move depends on uninitialised value(s) 21:49
==23285== at 0x509C802: is_stack_frame (roots.c:275)
==23285== by 0x509C88E: MVM_gc_root_add_temps_to_worklist (roots.c:284)
nine dogbert17: that's...odd 21:56
but yes, that was my suggestion
dogbert17 I wonder what's going on 21:58
nine Maybe the sc is already broken when it's ROOTed? 22:02
dogbert17 might be worth investigating 22:03
nine I wonder if it'd be a better approach to unify the 3 nested runloop implementations first. I know that the code in NativeCall.c is broken - just like the one in nativecall_dyncall.c was 22:04
dogbert17 aha 22:08
nine To be clear: I don't see an immediate connection between the deopt issue I fixed and the GC issue 22:09
dogbert17 perhaps this is a job for rr 22:16
nine definitely worth a try 22:17
need to go to bed now...good night! And good luck hunting!
dogbert17 Good night. Unfortunately I'm unable to run rr on my Ryzen system :( 22:20