github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
nwc10 good *, #moarvm 06:54
timotimo there may not have been a geth announcement, but i just pushed some stuff 11:16
MasterDuke timotimo: to where? 11:27
timotimo: btw, any thoughts on github.com/perl6/nqp/pull/532 ? 11:30
timotimo github.com/MoarVM/MoarVM/commit/b1...bc1b1ec9b4 11:31
(and the commits before and after that)
timotimo invested a little bit of time into making the heap snapshot profiler faster 12:23
the pauses are still definitely noticeable
hmm. if there were an additional thread, it could perhaps offload writing the file out onto it 12:24
MasterDuke: could you put in a little heuristic that selects instrumented when the filename ends in .html, .json, or .sql and heap when it ends in .mvmheap? 12:28
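The extension heuristic timotimo suggests could be sketched in C as below. This is a minimal illustration, not MoarVM's actual code: `choose_profiler`, `prof_kind`, and `ends_with` are hypothetical names invented for the sketch.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: pick the profiler kind from the output
 * filename's extension, as suggested in the discussion above. */
typedef enum { PROF_INSTRUMENTED, PROF_HEAP, PROF_UNKNOWN } prof_kind;

static int ends_with(const char *name, const char *suffix) {
    size_t nl = strlen(name), sl = strlen(suffix);
    return nl >= sl && strcmp(name + nl - sl, suffix) == 0;
}

static prof_kind choose_profiler(const char *filename) {
    if (ends_with(filename, ".html") || ends_with(filename, ".json")
            || ends_with(filename, ".sql"))
        return PROF_INSTRUMENTED;          /* instrumented profiler */
    if (ends_with(filename, ".mvmheap"))
        return PROF_HEAP;                  /* heap snapshot profiler */
    return PROF_UNKNOWN;                   /* fall back to explicit flag */
}
```

A real implementation would presumably still want an explicit option for filenames that match neither pattern.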
i should perhaps put that on the ticket 12:31
timotimo latest piece of fun is that it's crashing while trying to free a heap snapshot while doing a long-running process :| 15:16
oh, this time it was segfaulting while gc-marking some hash 15:18
gc-marking while doing the heap snapshot recording, that is 15:19
nine In case anyone is tempted to do a MoarVM release, please wait 16:52
Seems like another regression crept in, killing reproducibility of the build. 16:53
jnthn nine: Is there any way we can write a test in, say, the Rakudo `make test`, that would catch such cases? 16:56
It seems it's an easy thing to regress without noticing
timotimo can we get something on CI that checks if builds are reproducible? 17:00
jnthn Well, if we write a test, it will be on CI :) 17:01
dinner & 17:02
nine jnthn: I guess so, yes
timotimo somehow i managed to not read what jnthn wrote there 17:03
travis-ci MoarVM build errored. Jonathan Worthington 'Better error handling of process stdin writes 17:11
travis-ci.org/MoarVM/MoarVM/builds/525038883 github.com/MoarVM/MoarVM/compare/2...1779ac4e7f
nine jnthn: it's 146ede292 (HEAD, refs/bisect/bad) Start explicitly marking out mixin types 17:29
Ah! I'm pretty sure it's the lexicals to locals debug map 17:35
timotimo oh, is it hash-ordering-dependent? 17:47
nine yes 17:49
github.com/MoarVM/MoarVM/commit/8e...d3b649d666 17:51
Fix lexical to locals debug_map getting written to bytecode files in random order
Will maybe need a rebootstrap to have a real effect, but probably not 17:52
Alas, there seems to be yet another issue still 17:58
travis-ci MoarVM build passed. Stefan Seifert 'Fix lexical to locals debug_map getting written to bytecode files in random order 18:07
travis-ci.org/MoarVM/MoarVM/builds/525059189 github.com/MoarVM/MoarVM/compare/7...b232dbcc11
dogbert17 timotimo: interested in a gist with valgrind output? 21:00
if so take a look at this: gist.github.com/dogbert17/035adc66...12a8e71783 21:01
timotimo oh, huh, that's very odd 21:02
these arrays that the profiler is creating, they should all be rooted properly 21:05
i might have found the problem 21:10
timotimo dogbert17: could you go to src/profiler/instrument.c and switch lines 827 and 829? 21:11
dogbert17 sure
i.e. MVM_gc_worklist_add... before mark_gc_entries... if I understand you correctly 21:13
timotimo yes 21:16
the code as it was until now would put the collected data (i.e. what you get back after the profiler finishes) into a worklist, but only after it was used, immediately before it was freed 21:17
so it never mattered
so the root object that all data was pushed into by the instrumented profiler data dumper would go into an array that had already been considered dead and freed, or something like that 21:18
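The ordering bug timotimo describes can be sketched roughly as follows. This is pseudocode built from the log's description; only `MVM_gc_worklist_add` is a real MoarVM function, and `use_collected_data` stands in for whatever consumes the profiler's result.

```
/* Buggy order in src/profiler/instrument.c, per the diagnosis above:
 * the collected data is added to the GC worklist only after it has
 * already been used, immediately before it would be freed, so the
 * rooting never protects the data while it is actually live. */
use_collected_data(tc, data);               /* data can be collected here */
MVM_gc_worklist_add(tc, worklist, &data);   /* too late to matter */

/* Fixed order (the swap of lines 827 and 829): root first, then use. */
MVM_gc_worklist_add(tc, worklist, &data);
use_collected_data(tc, data);
```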
dogbert17 timotimo: it doesn't seem to be enough :( gist updated, scroll down a bit 21:22
hopefully I did something wrong 21:25
timotimo i don't think you did :( 21:27
i'm not sure i understand how the order of events goes in this :\ 21:29
oh 21:32
i ... might know what this could be about
yeah 21:34
ok, so here's what i think happens
the whole structure is rooted just fine 21:35
however
it gets created after everything has been marked, but before unmarked gen2 stuff gets freed
so the stuff that gets created for the whole datastructure doesn't get the "was seen" bits set 21:36
OK, want to try another thing? 21:37
dogbert17 timotimo: can do 21:38
timotimo in src/gc/orchestrate.c i moved the lines MVM_profile_dump_instrumented_data(tc); and MVM_profile_heap_take_snapshot(tc); to line 248, which is before "mark thread free to continue" 21:39
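The phase-ordering problem can be pictured roughly like this. The sequence is pseudocode reconstructed from timotimo's diagnosis; only the two `MVM_profile_*` calls and the "mark thread free to continue" step are named in the log.

```
/* Simplified sequence from the log's diagnosis:
 *   1. everything reachable is marked ("was seen" bits set)
 *   2. the profiler's data structure is built  <- gets no mark bits
 *   3. unmarked gen2 objects are freed         <- fresh data swept too
 *
 * Suggested change in src/gc/orchestrate.c: run the dumps before the
 * "mark thread free to continue" step (around line 248): */
MVM_profile_dump_instrumented_data(tc);
MVM_profile_heap_take_snapshot(tc);
/* ... "mark thread free to continue" happens after this point */
```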
dogbert17 sure, should I revert the earlier fix while I do this or do you want to keep it? 21:40
timotimo no, keep that one, too
dogbert17 ok
timotimo it may end up not mattering whether that line is even there at all, but the way i had you change it was definitely less wrong 21:41
dogbert17 the lines that I should move, are they around line 190? 21:44
timotimo yep
dogbert17 preliminary findings are interesting, the valgrind error seems to have disappeared 21:49
but the following message is now shown instead: Writing profiler output to profile-1556315291.2307668.html - Some exceptions were thrown in END blocks: X::AdHoc: This representation (VMArray) does not support associative access (for type BOOTArray) in any <main> at /home/dogbert/repos/rakudo/perl6.moarvm line
timotimo oh 21:54
i wonder if i accidentally put something in moar but not in nqp?!
i know the profiler is very annoying to get backtraces for :( :(
oh, actually
it's probably easy to gdb the process, set a breakpoint for the line that throws that adhoc exception there, and then call MVM_dump_backtrace 21:55
dogbert17 I could try that 21:56
it seems to be in the interestingly named function 'static void die_no_ass' 21:58
timotimo %)
that'd be the one
dogbert17 recompiles moarvm with --no-optimize 22:00
timotimo: gist updated 22:03
timotimo can you find out what the key is? 22:06
should be able to see it when you print *key, i'd think
also, what's line 512 in your stage2/NQPHLL.nqp? 22:08
dogbert17 (gdb) p *key - $2 = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 1, flags = 16, size = 48}, st = 0x66dcb0} 22:10
timotimo oh, is that just a MVMObject?
in that case, p *((MVMString *)key)
dogbert17 as for line 512: 22:12
511 # Post-process the call data, turning objects into flat data.
512 for $data {
513     if nqp::existskey($_, "call_graph") {
timotimo can you put an nqp::ishash($_) && in front of that, just to see what happens? 22:13
dogbert17 (gdb) p *((MVMString *)key) - $1 = {common = {header = {sc_forward_u = {forwarder = 0x0, sc = {sc_idx = 0, idx = 0}, st = 0x0}, owner = 1, flags = 16, size = 48}, st = 0x66e0c0}, body = {storage = {blob_32 = 0x1b70060, 22:15
blob_ascii = 0x1b70060 "call_graph\215\004", blob_8 = 0x1b70060 "call_graph\215\004", strands = 0x1b70060, any = 0x1b70060}, storage_type = 2, num_strands = 0, num_graphs = 10,
cached_hash_code = 14322284749794657829}}
timotimo right-o, so that's in fact that line 22:15
dogbert17 change to this? if nqp::ishash($_) && nqp::existskey($_, "call_graph") { 22:19
timotimo yeah 22:25
dogbert17 I guess I will have to build and install nqp after that change 22:27
timotimo yes, and rakudo, too 22:28
sadly ...
dogbert17 ah 22:29
timotimo i'll be afk for a little bit 22:31
dogbert17 ok 22:32
doing the change in 'stage2/NQPHLL.nqp' was a mistake :) 22:34
timotimo oh, i should have said that 22:39
dogbert17 if I did this correctly (I might have) the error is still present but the stacktrace looks different 22:41
at gen/moar/stage2/NQPHLL.nqp:220 (/home/dogbert/repos/rakudo/install/share/nqp/lib/NQPHLL.moarvm:post_process_thread_data)
from gen/moar/stage2/NQPHLL.nqp:513 (/home/dogbert/repos/rakudo/install/share/nqp/lib/NQPHLL.moarvm:dump_instrumented_profile_data)
timotimo i think i'll look more closely at this tomorrow 22:42
we already got a lot further, that's nice
dogbert17 thanks, I also need a nap
good night and sleep well 22:43
timotimo same to you 22:52