github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
00:06 Kaiepi left 00:12 Kaiepi joined 00:44 lucasb left 00:48 MasterDuke joined, MasterDuke left, MasterDuke joined 05:31 domidumont joined 05:49 domidumont left 06:10 domidumont joined 06:31 AlexDaniel` left, Garland_g[m] left 06:43 AlexDaniel` joined 07:21 Garland_g[m] joined 08:10 zakharyas joined 08:17 robertle_ joined
jnthn morning, #moarvm o/ 09:20
timotimo greetings
i'm now in the amusing situation where i have to find out why one measurement says a few thousand bytes were freed from the nursery and another says a few tens of thousands were, or something like that 09:21
interesting. i looked at the wrong keys, yet it still doesn't add up 09:22
does cleared bytes also count unmanaged bytes, i wonder 09:23
09:52 domidumont left 09:53 Garland_g[m] left 09:56 AlexDaniel` left 10:04 AlexDaniel` joined 10:40 Garland_g[m] joined 11:00 zakharyas left 11:44 domidumont joined 12:50 zakharyas joined
Geth MoarVM: a353260539 | (Jonathan Worthington)++ | 2 files
Fix MIN/MAX redefinition warnings
13:03
13:09 robertle_ left 13:27 Guest16965 joined
Guest16965 jnthn: are you doing fixes again? (dogbert17) 13:28
jnthn Taking a look at the leak reported in github.com/rakudo/rakudo/issues/2803 13:30
lizmat commented on that ticket 13:37
timotimo lizmat: i think the code snippets lack the initial send? 13:38
jnthn lizmat: Not sure what you mean; I get loads of output? 13:39
lizmat: did you accidentally remove the initial .send(0)?
timotimo hm, in an earlier comment there's a $channel.send: 0 inside the LEAVE of a react block; do we actually fire the LEAVE phaser when the react is still going?
i mean, they say it gives output, so i suppose it has to
lizmat ah, with an initial send it does :-)
jnthn timotimo: I'd expect it to fire right away when the react block's setup run is completed
timotimo OK, good 13:40
for the other thing there's another phaser
jnthn *sigh* I tried to --profile it to see what it's allocating, and it SEGVs almost immediately :(
(The second version with the .tap, that is) 13:41
timotimo oh, ouchies
jnthn Thread 1 "moar" received signal SIGSEGV, Segmentation fault.
0x00007ffff76f17ec in MVM_profiler_log_gc_deallocate ()
from /home/jnthn/dev/MoarVM/install/bin/../lib/libmoar.so
Will do a debug build to get line numbers 13:42
timotimo oh, i didn't expect that to be explosive
feel free to comment out the call to that function
lizmat cycling&
timotimo hm, another object with a nulled out stable pointer 13:43
jnthn 0x00007ffff76f126c in MVM_profiler_log_gc_deallocate (tc=tc@entry=0x604a60,
object=object@entry=0x326a4c0) at src/profiler/log.c:293
293 MVMObject *what = STABLE(object)->WHAT;
Guest16965 jnthn: interesting
timotimo on my end it has its flags = 16, which is only SECOND_GEN
i see that the line below that in MVM_gc_collect_free_gen2_unmarked guards against STABLE(obj), i.e. skips objects with nulled out stables 13:44
what kinds of objects are those?
jnthn Hm, when is this being called? 13:45
timotimo during a full collection
well, after, really
when we go through the entire gen2 to rebuild the free list and call ->free on all the objects that have a free function in their REPR
jnthn Yeah, but I think we chain a free list through the gen2 pages or something
timotimo yeah, we do
jnthn Ah, the collection also guards against a null STable 13:47
Note that you're missing the call to profile in the case of dead oversize objects 13:49
timotimo oh, good catch
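To make the shape of that sweep concrete, here is a toy, self-contained C sketch of the pattern timotimo describes: per-slot mark flags, a free list chained through the dead slots themselves, and the null-pointer guard the collector applies before touching an object's metadata. Every name in it is made up for illustration; only the structure mirrors the MVM_gc_collect_free_gen2_unmarked discussion above.

    /* Toy model of the gen2 sweep: NUM_SLOTS fixed-size slots in one
     * "page", a live flag per slot, and a free list threaded through
     * the dead slots.  The meta pointer stands in for the STable. */
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_SLOTS 8
    #define FLAG_LIVE 1

    typedef struct {
        int   flags;
        void *meta;                          /* stand-in for ->st */
    } Slot;

    int main(void) {
        Slot  page[NUM_SLOTS] = {{0}};
        void *freelist = NULL;

        page[2].flags = FLAG_LIVE;           /* pretend slot 2 was marked */
        page[5].meta  = &page[5];            /* slot 5 has a "STable" */

        for (size_t i = 0; i < NUM_SLOTS; i++) {
            if (page[i].flags & FLAG_LIVE) {
                page[i].flags &= ~FLAG_LIVE; /* clear mark for the next GC */
            }
            else {
                /* Guard before dereferencing, as the collector guards
                 * against STABLE(obj) == NULL; skipping this guard in
                 * the profiler is exactly the crash above. */
                if (page[i].meta != NULL) {
                    /* here the real sweep calls the REPR's free function */
                }
                *(void **)&page[i] = freelist;  /* thread the free list */
                freelist = &page[i];
            }
        }
        printf("free list head: %p\n", freelist);
        return 0;
    }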
Geth MoarVM: d68b57580b | (Jonathan Worthington)++ | src/profiler/log.c
Guard against objects with null ->st

It's not entirely clear when this situation happens; however, the GC code itself also guards against that case.
13:51
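The fix itself is small; a sketch of its shape follows (not the verbatim patch — it assumes MoarVM's internal headers, with the signature taken from the gdb output above):

    void MVM_profiler_log_gc_deallocate(MVMThreadContext *tc, MVMObject *object) {
        /* Bail out if the STable pointer was nulled out, mirroring the
         * guard the gen2 sweep already had; otherwise the next line's
         * STABLE(object)->WHAT dereferences a null pointer. */
        if (!object || !STABLE(object))
            return;
        /* ... the existing deallocation logging continues here ... */
    }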
timotimo i'll be AFK for a bit, but it does look like things are working again 13:52
of course you'll need a proper termination for the profiler data to be taken and outputted
jnthn yeah, already did that
timotimo what's up in the moarperf repo should currently be totally usable 13:53
jnthn grmbl...even though I ran `zef upgrade App::MoarVM::HeapAnalyzer`, moar-ha on my snapshot blows up immediately 14:04
Considering the snapshot...oops!
Backtrace(27 frames)
Cannot unbox 66 bit wide bigint into native integer
14:07 lucasb joined
jnthn ah, though maybe it doesn't cope too well with me having Ctrl+C'd it... 14:08
ah, seems so
Yeah. I thought I remembered some discussion to the effect that it wrote the format so you could get away with doing that, and then read the records that were written, but maybe that was a speculated feature? :) 14:10
timotimo yes, sorry, that was still speculation :( 14:23
it should be possible to attach gdb to it and manually call MVM_profile_heap_end 14:24
perhaps an "atexit" would have been a good idea for this actually
14:31 squashable6 left 14:33 squashable6 joined 15:52 zakharyas left
AlexDaniel I'd love to see github.com/MoarVM/MoarVM/pull/1072 merged before the release 16:00
maybe with a tweak to the pad argument, or maybe it's good as is…
bugs.ruby-lang.org/issues/15667 16:01
it looks like they use 0 16:02
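For context, what the Ruby ticket does is call glibc's malloc_trim after GC; the pad argument is how many bytes of free slack to keep at the top of the heap before returning the rest to the OS, and Ruby passes 0. A sketch (trim_after_gc is a hypothetical wrapper; malloc_trim is a glibc extension, not portable):

    #include <malloc.h>   /* glibc-specific */

    void trim_after_gc(void) {
        malloc_trim(0);   /* pad = 0: keep no extra slack; release all
                             free memory at the heap top back to the OS */
    }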
16:13 Kaiepi left, Kaiepi joined 17:10 domidumont left 17:20 domidumont joined
Geth MoarVM: 25b486dee8 | (Jonathan Worthington)++ | 4 files
Avoid preserving ->caller unless we really need it

We need it for backtraces and context introspection. Otherwise, it can go. Preserving it can cause leaks involving taken closures: the taken closure has an ->outer frame that in turn points to a ->caller and keeps it alive. This was involved in the leak that was reported in
  github.com/rakudo/rakudo/issues/2803.
17:29
timotimo in that bug there's another interesting tidbit: we can also just NativeCall into malloc_trim if the user Knows What They Want™
17:30 domidumont left 17:50 Kaiepi left 17:51 Kaiepi joined
jnthn AlexDaniel: Yeah, I'll experiment a bit with it next week :) 17:51
AlexDaniel: Interestingly, that leaking example looks stable rather than leaky under massif, which does suggest we might be seeing fragmentation problems 17:52
17:52 Kaiepi left
jnthn We can often end up with a lot of memory allocated on one thread and then released on another, and maybe that's not always handled too well 17:53
dinner, bbl
17:53 Kaiepi joined 20:46 domidumont joined 20:53 domidumont left 21:10 releasable6 left 21:14 releasable6 joined 21:56 Kaypie joined 21:58 leedo left 21:59 harrow` joined 22:00 samcv_ joined 22:01 Voldenet left, samcv left, Ulti_ joined, harrow left, Ulti left, Kaiepi left 22:04 Voldenet joined, Voldenet left, Voldenet joined 22:09 leedo joined 23:25 AlexDaniel left 23:29 AlexDaniel joined 23:59 greppable6 left