github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
ugexe you'd create a Distribution::Foo.new(file => $elf-file) that allows one to do $dist.content("lib/MyModule.pm6").open.slurp, and $dist.meta. Then you create a CompUnit::Repository that can take a query (the `MyModule` of `use MyModule`) and return the previously mentioned distribution if it exists 16:56
github.com/ugexe/Perl6-CompUnit--R...itory--Tar this might be somewhat helpful in that it loads bytecode from stdout of a tar process 16:57
(note CompUnit::Repository-s are mostly boilerplate) 16:58
handling %?RESOURCES won't be possible the way it is currently implemented though (it requires that %?RESOURCES<foo>.IO.slurp works, instead of working like a Distribution does with $dist.content("foo")) 16:59
in the tar example it cannot simply pipe the data in for resources... it *must* be written to a file somewhere (and this is a flaw, not a design choice) 17:00
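A minimal Raku sketch of the shape ugexe describes; Distribution::Foo, $!file, and the archive handling are hypothetical, and only the .meta/.content interface comes from the actual Distribution role (JSON::Fast is assumed to be installed):

    # Hypothetical distribution backed by a single archive file; the
    # Distribution role only requires .meta and .content($name-path).
    class Distribution::Foo does Distribution {
        has $.file is required;   # e.g. the ELF or tar archive

        method meta(--> Hash) {
            use JSON::Fast;       # assumption: JSON::Fast is installed
            from-json self.content('META6.json').open.slurp;
        }

        method content($name-path) {
            # must return something whose .open gives a slurpable handle,
            # e.g. an IO::Handle over bytes extracted from $!file;
            # the archive-specific extraction is left as a stub
            ...
        }
    }

A matching CompUnit::Repository then maps the `MyModule` of `use MyModule` to such a distribution; ugexe's Tar repository linked above shows how much of that side is boilerplate.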
dogbert17 .seen timotimo 18:05
timotimo yo 18:08
dogbert17 hello, wanna contemplate a mystery? 18:09
I was inspired by a post from vrurg, gist.github.com/dogbert17/1de3deeb...b8389558dd
moritz what came first, the bug or the debugger? :D
dogbert17 there seems to be a heap of problems :) 18:10
timotimo i wonder if anything's changing the speshlog's contents while the heap snapshot is being taken 18:14
can you build this with ASAN? i'd expect it to not break under valgrind because of the single-threaded nature of valgrind 18:15
dogbert17 I can
I added ASAN output at the end of the gist 18:17
timotimo OK. in the debugger, what does body look like? 18:18
dogbert17 sec 18:19
timotimo and i is presumably 0?
dogbert17 ok, updated the gist again 18:21
anything suspicious jumping out? 18:22
entries is indeed 0
timotimo yeah, entries is null, but there's a null check for that right before the for loop in that switch 18:23
does the thread entry in the body look sane?
can you double-check IS_CONCRETE((MVMObject *)0x42db710) please? 18:24
dogbert17 (gdb) p IS_CONCRETE((MVMObject *)0x42db710) 18:25
Cannot access memory at address 0x42db71c
interestingly, the loop variable i = 4685 18:28
timotimo what the what
in "info threads" what does the spesh thread look like at that moment? 18:29
probably want a "threads apply all bt" on top of that
dogbert17 * 1 Thread 0x7ffff7fdb700 (LWP 2786) "moar" 0x00007ffff7593299 in describe_refs (tc=0x604a70, ss=0x7fffffffbf30, st=0x6704f8, data=0x2451a48) at src/6model/reprs/MVMSpeshLog.c:69
2 Thread 0x7ffff61d5700 (LWP 2790) "moar" pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
could a gc take place while the loop is running? 18:31
timotimo the loop inside describe_refs? 18:32
should be impossible; heap snapshot taking is part of the GC
and if there's only those two threads, and one is sleeping, not much can happen 18:33
dogbert17 aha, the loop starting on line 68, that counter is 4685 when the crash occurs 18:34
timotimo and entries is null, yeah?
dogbert17 yup
timotimo how often do you have to run it until it crashes? 18:35
dogbert17 perhaps 5 times on my system; if it doesn't crash almost immediately I break the program and try again
timotimo i got a crash, but this time it's in get_collectable_idx 18:36
but also i've switched branches in moarvm and didn't recompile
dogbert17 is describe_refs part of the call stack? 18:37
timotimo i had already killed gdb 18:38
ah, yes, MVMStaticFrameSpesh's describe_refs is on the stack 18:42
timotimo well that is kind of odd 18:50
dogbert17 ? 18:51
timotimo the callsite this stat entry's describing allegedly has 56 flags
timotimo but the flags pointer is a null pointer 18:52
dogbert17 interesting 18:53
timotimo i'll go in with "rr record" and hope i get another crash on tape
dogbert17 it will be interesting to hear what rr can accomplish given the amount of hype you've given it :) 18:54
timotimo first of all i'll have to reproduce the crash while it's in rr 18:59
took me a couple of runs last time
got one 19:00
dogbert17 yay
timotimo i wonder why you get crashes in one spot and i do in another 19:01
i'd rather investigate the crashes in your spot, it's not a five-levels-deep nested for loop :D
$10 = {bytecode_offset = 671266272, num_types = 21233, 19:03
those values look odd
dogbert17 a bit high
have you built with --no-optimize? 19:04
timotimo no 19:05
i probably should have, though
dogbert17 there's no chance that the compiler might, for some odd reason, remove/optimize 'if (!body->entries) return;'? 19:06
MasterDuke dogbert17: you could put a printf before the return and see if that changes anything 19:07
timotimo that'd be very bad 19:09
dogbert17 so why is the loop entered if entries is 0?
MasterDuke: still satisfied with your new machine? 19:12
MasterDuke dogbert17: yep, makes me want to just find things that were slow before and try them now, even if there's no need 19:13
i probably should upgrade the ssd though, it's about as old as the other hardware and pretty small 19:14
dogbert17 I used an old 80Gig Intel G2 for many years 19:15
timotimo MVM_spesh_stats_cleanup frees the memory that the profiler later tries to read
dogbert17 did rr tell you this? 19:16
timotimo yes
and that's happening in the middle of a GC run
that's fun!
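That also answers the earlier question about the null check: the compiler isn't removing it, the check simply runs once up front and then races with the cleanup. A stripped-down sketch of the pattern in generic C with pthreads (illustrative only, not MoarVM source; log_body, describe_entries, and cleanup_thread are made-up names):

    /* Sketch of the use-after-free race diagnosed above: a reader
     * null-checks a shared buffer once, then a cleanup thread frees
     * and nulls it mid-loop with no synchronization. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        size_t num_entries;
        int   *entries;
    } log_body;

    static void describe_entries(log_body *body) {
        if (!body->entries)              /* check passes once, up front */
            return;
        for (size_t i = 0; i < body->num_entries; i++) {
            /* if the cleanup runs now, this reads freed memory; by the
             * time a debugger looks, entries is NULL yet i may already
             * be in the thousands, matching the i = 4685 seen above */
            volatile int v = body->entries[i];
            (void)v;
        }
    }

    static void *cleanup_thread(void *arg) {
        log_body *body = arg;
        free(body->entries);             /* like the spesh stats cleanup */
        body->entries = NULL;            /* too late for a running reader */
        return NULL;
    }

    int main(void) {
        log_body body = { 5000, calloc(5000, sizeof(int)) };
        pthread_t t;
        pthread_create(&t, NULL, cleanup_thread, &body);
        describe_entries(&body);         /* may crash, just like the log */
        pthread_join(t, NULL);
        return 0;
    }

The fix direction is the one implied later in the log: the spesh worker must not be mutating these structures while the GC/heap-snapshotter walks them.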
MasterDuke mine is an OCZ-AGILITY3, only 90gb 19:17
timotimo there's 30 to 50 spesh stats in my yard and they're eating my GC runs =o 19:17
dogbert17 cool, so you found the problem
at least ssd prices are getting lower 19:19
timotimo yeah they are 19:20
a whole lot
MasterDuke but now that i have a mb that supports pcie 4.0, i'm forced to wait for ssds that support it 19:22
timotimo you are? 19:24
your new motherboard has pcie4 but not m.2? :) 19:25
MasterDuke it has m.2 also. i'm "forced" to wait 19:27
timotimo oh 19:29
mine doesn't have m.2, also no ddr4
MasterDuke i could of course get something much faster and larger than what i have now for pretty cheap, but since the mb does support pcie 4.0, i feel compelled to use it. especially since i probably won't upgrade whatever i get for a long time, so the pcie 4.0 might in fact go to "waste" 19:35
timotimo blergh, i can't properly step in the spesh worker because of the MVMROOT macros 19:36
MasterDuke i guess you'd have to temporarily replace them with explicit *_push()/_pop()? 19:38
timotimo yep, i just did 19:39
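For context: MVMROOT registers a local as a temporary GC root around its body, which is why single-stepping through the macro is painful. The hand-expanded idiom looks roughly like the fragment below (a sketch; the exact macro body varies across MoarVM versions, but MVM_gc_root_temp_push/MVM_gc_root_temp_pop are the real API underneath):

    /* Roughly what MVMROOT(tc, obj, { ... }) expands to, written out
     * so a debugger can step through it line by line. Sketch only. */
    MVM_gc_root_temp_push(tc, (MVMCollectable **)&obj);
    /* ... body: code that may allocate and thus trigger GC, which may
       move obj; the temporary root keeps the pointer updated ... */
    MVM_gc_root_temp_pop(tc);  /* forgetting this pop is exactly the
                                  mistake that surfaces below as a
                                  bogus double-free */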
i'll also throw out the actual writing of the heap snapshot to make it faster
and set optimize down to 0 19:41
well, color me confused 19:58
i'm getting a double-free of a spesh log's entries array
free_nursery_uncopied destroyed the spesh log before 19:59
nope, forgot to pop the roots 20:02
it's disgustingly warm here again 20:03
also it looks like telemeh doesn't actually work at the moment? 20:24
ooh 20:27
it's the perl6 runner that didn't pick up HAVE_TELEMEH
that's inconvenient 20:28
could it pls crash 20:52
timotimo finally 20:57
yeah, spesh is totally running while the gc is doing the heap snapshot stuff 20:58
shame on you, spesh worker
dogbert17 can you stop it? 21:04
timotimo it's not supposed to be doing this 21:13
timotimo i added a load more output to the telemetry log 21:18
regarding gc orchestration 21:19
m: say (714798646 R- 714895101) / 3392367279.199565 21:22
camelia 0.0000284329472789
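(For anyone puzzled by the metaop: R- reverses the operands of infix -, so the line above computes (714895101 - 714798646) / 3392367279.199565, presumably two telemetry timestamps over a total duration. A quick check:)

    say 714895101 - 714798646;       # 96455
    say 714798646 R- 714895101;      # 96455 as well: R swaps the operands
    say 96455 / 3392367279.199565;   # 0.0000284329472789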
timotimo darn it 21:32
without telemeh it crashes rather quickly
unbelievable! a crash! 21:41