01:06 colomon joined
02:22 colomon joined
07:04 domidumont joined
07:09 domidumont joined
09:17 Ven joined
11:06 Ven joined
11:52 colomon joined
12:01 vendethiel joined
12:04 colomon joined
12:45 Ven joined
12:57 colomon joined
13:15 zakharyas joined
13:24 colomon joined
13:39 colomon joined
14:06 ZoffixWin joined
ZoffixWin | Hey. Someone told me I need some sort of flag to enable before attempting to use valgrind, because I'd get false positives otherwise... what's that flag and how do I set it? | 14:07
timotimo | you just have to run moar with --full-cleanup | 14:08
ZoffixWin | I actually see perl6-valgrind-m in rakudobrew bin | 14:15
timotimo | yeah
timotimo | but it doesn't set --full-cleanup
timotimo | because nobody put it in yet
timotimo | *hint hint*
ZoffixWin | But I've no idea how to put in --full-cleanup. All these scripts are some sort of a Perl 5 script :/ | 14:16
ZoffixWin | nm | 14:17
timotimo | just cat it and run it by hand | 14:18
ZoffixWin | Adding --full-cleanup produces a segmentation fault: gist.github.com/zoffixznet/c9951c3...2bef733f73 | 14:24
timotimo | where is that segmentation fault? :o | 14:26
timotimo | aaw, you have your moar built without --debug | 14:27
ZoffixWin | Oh
timotimo | turns out we destroy the nfg stuff after the fixed size allocator got destroyed | 14:28
timotimo | so when i move fsa_destroy after nfg_destroy, it should not give you any errors any more.
* timotimo tries it out | 14:29
timotimo | great, it doesn't give any errors any more
* ZoffixWin has no idea what any of that means... | 14:30
dalek | MoarVM: f98914f | timotimo++ | src/moar.c: the NFG uses the FSA, so have to destroy FSA after NFG.
timotimo | :)
timotimo | just that the error was helpful and very simple
timotimo | but also rather harmless | 14:31
ZoffixWin | So, do I still need to rebuild with --debug to get --full-cleanup to work?
ZoffixWin | Or is the fix you just pushed the fix for the segmentation fault?
timotimo | no, but it'll give better tracebacks in valgrind
timotimo | it fixes the errors you got, yeah | 14:32
timotimo | was it really a segmentation fault?
ZoffixWin | I see 'Segmentation fault' on this line: gist.github.com/zoffixznet/c9951c3...e-bash-L20
timotimo | oh! haha
timotimo | that's just for saying the banner up top
timotimo | that one didn't need --full-cleanup, but it's probably fixed now | 14:33
timotimo | thanks for the tip in any case :) | 14:35
14:51 Ven joined
dalek | MoarVM: be10119 | timotimo++ | src/moar.c: close dynvar log filehandle in instance destroy | 15:29
dalek | MoarVM: e240e75 | timotimo++ | src/moar.c: cross thread write logging has a mutex. close it.
timotimo | ^- not worth a lot, but cheap to do
timotimo | the heap collection subsystem doesn't offer a way to destroy the data without creating the output yet; we might want to have a "cancel heap snapshot mode" function that just destroys the heap snapshot data that belongs to an instance | 15:37
16:06 cognominal joined
17:44 cognominal joined
moritz | www.reddit.com/r/ProgrammingLangua...le_target/ looks like it could do from some input from MoarVM developers :-) | 19:04
moritz | erm, that no grammar | 19:05
moritz | some input would be beneficial to that thread :-)
geekosaur | s/do from/do with/ ? | 19:07
timotimo | well, we only use dynasm, which is a shared dependency between our stuff and luajit's stuff now | 19:08
timotimo | the luajit IR isn't really similar to what we target, i don't think
moritz | oh | 19:09
jnthn | Also, luajit doesn't use dynasm for its JIT, afaik, just for its hand-coded interpreters. | 19:09
jnthn | (I think this is true of luajit 2, and luajit 1 did use dynasm for its JIT also) | 19:10
timotimo | oh, interesting | 19:12
19:37 zakharyas joined
timotimo | MadcapJake: do note, however, that the multi-core version spends about 50% of total time contending for a lock on one of our allocators | 21:56
MadcapJake | could you elaborate a bit? Is this the $input-channel.list that you're speaking of? | 21:58
timotimo | no
timotimo | just internal stuff. the thing in question here is the Fixed Size Allocator, which we currently use to allocate every single call frame | 21:59
timotimo | so basically, the threads are fighting to be allowed to invoke subs | 22:00
timotimo | if we could be invoking less stuff, i.e. have fewer curlies or somehow make the optimizers better able to inline stuff, it wouldn't be as bad
timotimo | alternatively, we wait for jnthn's current refactoring work. on this particular code, the partial refactor still crashes.
MadcapJake | interesting! Nice to hear the current refactor will address this. Though I'm not quite grasping how. | 22:05
timotimo | want me to explain what jnthn is up to? | 22:14
timotimo | his own blog posts can probably explain better than me, but i can answer questions | 22:15
MadcapJake | i've read them actually, but I've not quite comprehended them :) isn't the big idea that call frames will be gc-able and reference counting will go away? (probably way off on that :P) | 22:21
timotimo | right, but with the added thing that whenever we know for a fact that a bunch of frames will all be ref-counted-1, we can have them in a cheaper storage than GC-managed | 22:28
timotimo | and only when a frame escapes, it'll be migrated to GC storage
timotimo | that'll also likely mean we'll be allocating things in a per-thread rather than shared pool | 22:30
timotimo | meaning all the contention in multi-threaded situations will also go away
MadcapJake | wow nice! | 22:31
22:49 diakopter joined
timotimo | well, of course not all, as in all | 22:53
timotimo | but the big one will be gone. the one where every new call frame has to go through one central resource | 22:54