00:54
patrickz joined
00:59
evalable6 joined
01:11
jpf joined
01:55
AlexDaniel joined
01:58
bisectable6_ joined,
bisectable6 joined
02:58
ilbot3 joined
03:04
yoleaux joined
04:12
harrow` joined,
SourceBaby joined
04:56
TimToady joined
06:04
releasable6 joined,
greppable6 joined
06:05
squashable6 joined
07:10
geospeck joined
07:14
domidumont joined
07:21
domidumont joined
07:30
brrt joined
07:31
brrt left
07:35
reportable6 joined
08:32
Ven`` joined
08:45
zostay joined
09:01
zakharyas joined
09:03
zakharyas joined
09:44
geospeck joined
10:15
brrt joined
samcv | argh gonna start over again. well sorta. i'm doing a rework of a bunch of point stuff to ucd2c.pl | 10:17 | |
but i've made so many changes and i need to go back to master and then iteratively do it. and not do it on top of adding faux properties for Bidi_Matching_Bracket, i can add that back in later | 10:18 | ||
frustratingly there are global $first_point and $last_point variables, but many subs ALSO have local variables of the same name | |||
ARGH | |||
so first i'm going to rename them with titlecase and work from there, since i think that was what ended up going wrong: not realizing certain things were local or not | 10:19 | ||
brrt | good * #moarvm | 10:21 | |
samcv | i'm thinking of changing all globals to titlecase to make it obvious | 10:23 | |
10:37
AlexDaniel joined
11:19
domidumont joined
11:43
brrt joined
brrt | i always just use $UPPERCASE for globals | 11:47 | |
samcv | hmm that's a choice too | 11:48 | |
yoleaux | 11:05Z <tbrowder> samcv: thanks! | ||
dogbert17 | what does this mean: Can NOT inline to (43) into Str (46): target has a :noinline instruction | 11:49 | |
looks a bit cryptic to the uninitiated :) | 11:53 | ||
jnthn | The method 'to' couldn't be inlined into the method 'Str' because there's an instruction inside 'to' that we're not allowed to inline | 11:58 | |
Maybe we should stick quotes around the names :) | |||
dogbert17 | aha, thx for the explanation, just testing the define in optimize.c in order to see what it outputs | 12:01 | |
what do the numbers, i.e. 43 and 46, mean? | 12:02 | ||
jnthn | They're ids | 12:09 | |
Useful if you need to differentiate between multi candidates with the same name, for example | |||
They can be looked up in the spesh_log | |||
dogbert17 | thx | 12:12 | |
the most common case for not inlining seems to be that the bytecode is too large. I guess there's a reason why the limit is set the way it is (384). Diminishing returns perhaps? | 12:13 | ||
timotimo | once a frame is gigantic, setting up a call record is the smallest part of executing the thing | 12:14 | |
but if you have only five instructions, setting up the call record and tearing it down for returning takes a significant amount of the total time | |||
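A rough sketch of the trade-off described above: inlining only pays off while the callee's bytecode is small compared with the fixed cost of setting up and tearing down a call record. The function and constant names below are illustrative; MoarVM's real spesh inliner differs in detail.

    /* Illustrative size-gated inline decision; not MoarVM's actual code. */
    #define INLINE_BYTECODE_LIMIT 384   /* bytes of callee bytecode */

    static int should_inline(unsigned int callee_bytecode_size,
                             int has_noinline_instruction,
                             int same_hll) {
        if (has_noinline_instruction)
            return 0;                   /* "target has a :noinline instruction" */
        if (!same_hll)
            return 0;                   /* "different HLLs" */
        /* Beyond this size the saved call-record overhead is a negligible
         * fraction of the callee's work, so inlining stops paying off. */
        return callee_bytecode_size <= INLINE_BYTECODE_LIMIT;
    }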
dogbert17 | aha, so 384 was chosen after some tuning procedure then? | 12:15 | |
timotimo | aye, perhaps for core setting compilation? | 12:16 | |
dogbert17 | also, once the reason, for not inlining was given as 'different HLLs'. What does that mean? | ||
dogbert17 asks a lot of noob questions atm | 12:17 | ||
timotimo | a hll has, among other things, a set of named classes that fulfill some role | 12:18 | |
like "slurpy array" | |||
if you pass a simple array from a perl6 function down into an nqp function - or vice versa - it will be converted | |||
we do that by calling "hllize" on the thing, which looks at "the current frame's hll" | |||
if we inline an nqp thing into a perl6 thing, we don't have "the current frame's hll" any more | 12:19 | ||
though we could conceivably turn "hllize" in an inlinee into "hllizefor" maybe? | |||
or store what hll something is in the inlines table, then we'd have to check for hlls there whenever we look that up | 12:21 | ||
not sure if it's worth it; what cases do you see where that happens? | |||
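A sketch of the idea floated above: record each inlinee's HLL in the inlines table so a generic hllize inside inlined code can be resolved against the inlinee's HLL (as hllizefor would) instead of the current frame's. The struct and field names are invented for illustration and are not MoarVM's real data structures.

    /* Hypothetical inline-table entry carrying the inlinee's HLL. */
    typedef struct {
        unsigned int  start_offset;   /* bytecode range covered by this inline */
        unsigned int  end_offset;
        const char   *hll_name;       /* HLL the inlined code was compiled under */
    } InlineEntry;

    /* Which HLL should an op at op_offset hllize for? */
    static const char *hll_for_op(const InlineEntry *inlines, int num_inlines,
                                  unsigned int op_offset, const char *outer_hll) {
        for (int i = 0; i < num_inlines; i++)
            if (op_offset >= inlines[i].start_offset &&
                    op_offset <  inlines[i].end_offset)
                return inlines[i].hll_name;   /* rewrite hllize -> hllizefor(this) */
        return outer_hll;                     /* not inside any inlinee */
    }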
dogbert17 | and hll does not mean High Level Language or ? | ||
Can NOT inline resolve (150) into grep-callable (1734): different HLLs | |||
timotimo | i wonder what that is | 12:22 | |
dogbert17 | the src has one grep, i.e. 'my $factors = (1..floor(sqrt($sum))).grep( -> $x { $sum % $x == 0} ).elems * 2;' | ||
timotimo | you should be able to find resolve in the speshlog and figure out what file and line it comes from | 12:24 | |
dogbert17 | hmm, that log is monstrous :) | 12:27 | |
timotimo | of course :D | ||
spesh does a whole lot of work | |||
dogbert17 | could it be 'Latest statistics for 'resolve' (cuid: 150, file: src/Perl6/World.nqp:3796)' | 12:28 | |
timotimo | looks good, name and cuid both match | ||
"create_lexical_capture_fixup", eh? | |||
i won't pretend i know how that stuff works :) | 12:29 | ||
dogbert17 | how are your wrists today? | 12:31 | |
timotimo | ungood, but not plusungood | ||
dogbert17 | is there anything you can do yourself in order to improve thing, e.g. some kind of training | 12:32 | |
*things | |||
timotimo | i can wear my braces and not use my hands at all | ||
dogbert17 | is that the recommended way | ||
sounds awfully limiting | 12:33 | ||
timotimo | 'tis | ||
dogbert17 | and for how long are you supposed to do this? | 12:35 | |
timotimo | until i feel better i guess | ||
i ought to get another appointment with the doc | |||
dogbert17 | argh | ||
13:23
geospeck joined
13:36
domidumont joined
13:42
domidumont1 joined
13:56
zakharyas joined
dogbert17 | jnthn: did you ever have time to look at github.com/MoarVM/MoarVM/issues/680 from MasterDuke? | 14:24 | |
15:05
geospeck joined
dogbert17 | anyway I have looked a bit at it and even run it through the profiler, i.e. --profile. If the generated output is correct (?) the reason for the leak seems to be in plain sight | 15:05 | |
the following line in the profiler output looks suspicious to me: 'The profiled code ran for 424326.62ms. Of this, 0ms were spent on garbage collection and dynamic optimization (that's 0%).' | 15:07 | ||
and 'The profiled code did 0 garbage collections. There were 0 full collections involving the entire heap. ' | |||
timotimo | one of the problems comes from a multi-threaded program being --profile'd | 15:09 | |
the other is from the spesh thing moving to its own thread and not being accounted for in the profile timing any more | |||
dogbert17 | does that explain the lack of GC's as well? | 15:10 | |
timotimo | yeah, that's multithreaded profiling stuff, too | ||
dogbert17 | the interesting thing is that according to valgrind there aren't many leaks bet there's plenty of 'reachable' memory, several hundred megs | 15:11 | |
*but | |||
timotimo | in these cases, a heap dump profile might help | 15:12 | |
dogbert17 | e.g. like this | ||
==31919== 128,604,456 bytes in 118 blocks are still reachable in loss record 3,205 of 3,205 | |||
==31919== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) | |||
==31919== by 0x41A90BF: MVM_malloc (alloc.h:2) | |||
==31919== by 0x41A90BF: MVM_string_join (ops.c:1843) | |||
==31919== by 0x40EA526: MVM_interp_run (interp.c:1641) | |||
timotimo | ah? | 15:13 | |
i wonder if we're being silly with ropes | |||
dogbert17 | how silly :) | ||
timotimo | like, we're making single-character substring ropes out of every 4 kilobyte block and we keep those single characters around, not realizing each one holds on to 4k of data | 15:14 | |
not necessarily the case, but certainly possible | 15:15 | ||
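An illustrative sketch of the failure mode described above, assuming a rope-style string where a substring is a strand pointing into its parent's buffer: a one-character strand keeps an entire 4 KB parent block reachable. The types here are made up for the example, not MoarVM's string representation.

    typedef struct {
        char   *storage;        /* flat 4 KB (or larger) character buffer */
        size_t  num_chars;
    } FlatString;

    typedef struct {
        FlatString *parent;     /* strand keeps the whole parent alive... */
        size_t      start;
        size_t      length;     /* ...even when length == 1 */
    } StrandString;

    /* A 1-character substring of a 4096-byte block: the GC sees the strand,
     * the strand references the parent, so all 4 KB stay live. Copying tiny
     * substrings into their own small buffers (or collapsing strands past a
     * threshold) would avoid pinning the big block. */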
dogbert17 | MasterDuke's example contains the line 'output => @chunks.join.chomp,' which I suspect contains all rakudo commits | 15:16 | |
timotimo | i don't see anything like that there, though | ||
dogbert17 | odd | ||
there's other stuff as well (spam warning) | 15:17 | ||
==31919== 78,069,964 bytes in 3,616 blocks are still reachable in loss record 3,204 of 3,205 | |||
==31919== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so) | 15:18 | ||
==31919== by 0x4197456: MVM_malloc (alloc.h:2) | |||
==31919== by 0x4197456: MVM_string_utf8_decodestream (utf8.c:350) | |||
==31919== by 0x4193B93: run_decode (decode_stream.c:115) | |||
==31919== by 0x4193B93: MVM_string_decodestream_get_available (decode_stream.c:492) | |||
==31919== by 0x4160116: MVM_decoder_take_available_chars (Decoder.c:245) | |||
==31919== by 0x40E4255: MVM_interp_run (interp.c:5121) | |||
MasterDuke | dogbert17: have you used heaptrack before? i find it helpful for memory related stuff | 15:24 | |
dogbert17 | nope, I have not tbh | ||
timotimo | ok, so the second part is where we directly re-use the buffer from the decoder as the string's buffer without copying | ||
the first one, surprisingly the bigger one, seems to be about strands arrays being created? | 15:26 | ||
dogbert17 | timotimo: I saved a gist of the biggest leaks here: gist.github.com/dogbert17/cdf6f54e...77b909c9e1 | ||
MasterDuke | dogbert17: i'd be curious if you got similar results with my "use the fsa for string storage" branch? | 15:27 | |
dogbert17 | MasterDuke: I can try | 15:28 | |
15:28
zakharyas joined
MasterDuke | github.com/MasterDuke17/MoarVM/tre...ng_storage | 15:28 | |
it's rebased to head, and passes rakudo spectest | 15:29 | ||
dogbert17 | oh, so it's not a branch in the MoarVM project. So how do I use it then (I'm no git virtuoso)? | 15:36 | |
timotimo | you can "git remode add masterduke [email@hidden.address] then "git fetch masterduke" and "git checkout use_fsa_for_string_storage" should give you that branch | 15:37 | |
dogbert17 tries | 15:38 | ||
MasterDuke | s/remode/remote/ | ||
dogbert17 | Permission denied (publickey). | 15:39 | |
timotimo | ok, then use | ||
wait | |||
or maybe you have to put .git at the end | |||
or perhaps upper/lower case matters there? | |||
eater | dogbert17: use github.com:masterduke17/moarvm | 15:40 | |
your public key is not added to github? | |||
dogbert17 | don't have a public key | 15:42 | |
thx guys, got it working, building now | 15:46 | ||
MasterDuke: to me it seems as if the memory usage is more or less the same | 15:50 | ||
MasterDuke | hm, i guess if it's allocating large strings that makes sense | 15:53 | |
dogbert17 | your code example, i.e. the leak is fascinating, after 50 iterations, on a 32 bit system, I get: {:data("558.980 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("530.708 kB"), :share("26.484 kB"), :size("588.304 kB"), :text("8 kB")} | 15:54 | |
if I add the following lines as the two last statements in the loop '@commits = Empty; @tags = Empty;' memory usage changes, after 50 iterations, to: {:data("344.848 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("320.000 kB"), :share("26.472 kB"), :size("374.172 kB"), :text("8 kB")} | 15:56 | ||
MasterDuke | any difference if you add an nqp::force_gc in the loop? | ||
dogbert17 | sec | ||
timotimo | huh, the for loop is holding on to the result from get-output? | 15:59 | |
if you only empty out commits, not tags, does that make the same difference? | |||
dogbert17 | timotimo: will check | 16:00 | |
MasterDuke: your original code with force_gc in the loop: {:data("566.032 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("540.540 kB"), :share("26.484 kB"), :size("595.356 kB"), :text("8 kB")} | |||
MasterDuke | looks about the same | 16:02 | |
dogbert17 | timotimo: setting @commits = Empty, no forced_gc: {:data("350.648 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("325.708 kB"), :share("26.616 kB"), :size("379.972 kB"), :text("8 kB")} | 16:03 | |
16:07
brrt joined
dogbert17 | it's odd that it suddenly manages to get rid of +200MB, shouldn't it do that anyway, i.e. without setting @commits=Empty | 16:08 | |
16:13
brrt joined
timotimo | something must be erroneously holding on to the arrays | 16:18 | |
i'm not sure if making an array empty with = Empty actually shrinks the underlying storage, though; our arrays basically only grow ... | |||
dogbert17 | but something seems to change | 16:19 | |
timotimo | the objects it was holding will be gone | ||
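A minimal sketch of why '@commits = Empty' can release the elements without shrinking the array itself, assuming a grow-only backing store like the one described above; the names are illustrative, not MoarVM's VMArray.

    typedef struct {
        void   **slots;   /* element storage; capacity only ever grows */
        size_t   elems;   /* used slots */
        size_t   alloc;   /* allocated capacity */
    } GrowOnlyArray;

    /* Clearing: the objects previously held become collectable, but alloc
     * (and the slots buffer) stay the same size until the array itself dies. */
    static void array_clear(GrowOnlyArray *a) {
        for (size_t i = 0; i < a->elems; i++)
            a->slots[i] = NULL;
        a->elems = 0;
    }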
dogbert17 | fun 'fact', according to the borked --profile 98.24% of the time was spent in 'acquire SETTING::src/core/Semaphore.pm:5' | 16:22 | |
timotimo | not surprising if you know why profiling multithreaded programs breaks | 16:23 | |
the symptom is kind of like getting only the data from any random thread | |||
dogbert17 | aha | ||
timotimo | and since rakudo spawns worker threads for the thread pool, they'll be receiving work items on a queue | ||
all the time they're spending not doing something will be spent waiting for a work item to arrive | |||
and that's where the semaphore lives | 16:24 | ||
and since data from all other threads gets silently dropped, this easily reaches into the 90% range | |||
RSI break time | |||
dogbert17 | rest well | ||
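Why 'acquire' dominates a broken multi-threaded profile: an idle pool worker spends almost all of its time blocked waiting for work, roughly like the sketch below, and if data from the other threads is dropped that waiting swamps everything else. POSIX semaphores are used purely for illustration; Rakudo's scheduler is not actually written this way.

    #include <semaphore.h>

    /* Illustrative worker loop; not Rakudo's actual ThreadPoolScheduler. */
    static void worker_loop(sem_t *work_available,
                            void *(*take_item)(void),
                            void  (*run_item)(void *)) {
        for (;;) {
            sem_wait(work_available);   /* idle time accrues here ("acquire") */
            void *item = take_item();
            if (!item)
                break;                  /* shutdown sentinel */
            run_item(item);
        }
    }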
interesting, moving the declaration of 'my @commits' to the outer scope also seems to save ~200MB on a 32 bit system and you no longer have to set the variable to Empty after each iteration | 16:28 | ||
16:44
brrt joined
MasterDuke | huh, not what i would have expected | 17:10 | |
17:30
AlexDaniel joined
17:35
domidumont joined
17:47
geospeck joined
17:54
domidumont1 joined
19:36
unicodable6 joined,
committable6 joined
20:24
squashable6 joined
lizmat | so I'm looking at making my @a[2;2]; @a["0";"0"] work | 20:30 | |
and now I have a rather unstable MoarVM that crashes about 20% of the time running: | |||
my @matrix[2;2] = [1,2],[3,4]; @matrix[0;0] for ^1000000 | 20:31 | ||
the diff: gist.github.com/lizmat/be498731d5e...f01f42cce5 | 20:32 | ||
20:32
BooK joined
lizmat | does anybody see something weird there ? | 20:32 | |
20:33
benchable6 joined
lizmat | basically, I create a list_i of indices that are sensibly coercible to Int | 20:33 | |
oohh, rather reliably does: moar(74411,0x7fffcb5183c0) malloc: *** error for object 0x7f832e693308: incorrect checksum for freed object - object was probably modified after being freed. | 20:44 | ||
*** set a breakpoint in malloc_error_break to debug | |||
jnthn timotimo: is that something you could work with ? | |||
code in question is: | 20:45 | ||
my @matrix[2;2] = [1,2],[3,4]; @matrix[<0>;0] for ^1000000 | |||
this either runs for about 3 seconds, or crashes after about 350 msecs | 20:46 | ||
jnthn | I'd feed it to perl6-valgrind-m | 20:47 | |
As a first resort | |||
lizmat | ah, that was when I had a spectest running: without that it runs for about 1 second, or crashes after about 200 msecs | ||
jnthn | In the best case it points at something silly and easy to fix | ||
20:47
committable6 joined,
unicodable6 joined,
AlexDaniel joined,
SourceBaby joined,
Geth joined,
synopsebot joined,
dalek joined,
tbrowder joined
MasterDuke | lizmat: with your patch or without? | 20:48 | |
lizmat | about to commit the fix to rakudo | ||
then it's at HEAD | |||
it does make spectest ok | |||
github.com/rakudo/rakudo/commit/00632edb6f | 20:49 | ||
MasterDuke | got valgrind output, gisting now | 20:55 | |
gist.github.com/MasterDuke17/45323...4409368975 | |||
jnthn | .tell brrt SEGV in linear scan allocator: gist.github.com/MasterDuke17/45323...4409368975 | 20:56 | |
hm, no message bot? | 20:57 | ||
MasterDuke | what's the guy's name who runs yoleaux? i think it begins with a d | ||
jnthn | dpk iirc | 20:58 | |
Anyways, insofar as that is isolated to a relatively small piece of code, it's probably not too nasty | 20:59 | ||
(To find and fix, that is) | |||
20:59
yoleaux joined
jnthn | \o/ | 21:00 | |
.tell brrt SEGV in linear scan allocator: gist.github.com/MasterDuke17/45323...4409368975 | |||
yoleaux | jnthn: I'll pass your message to brrt. | ||
jnthn | Thanks. | ||
MasterDuke | no surprise, but it doesn't seem to segv with MVM_JIT_DISABLE=1 | 21:01 | |
lizmat hopes it's what's causing random crashes in make spectest | |||
jnthn | MVM_EXPR_JIT_DISABLE=1 to find out :) | 21:04 | |
Or what MasterDuke said, but the latter disables less | |||
lizmat | MVM_EXPR_JIT_DISABLE=1 6 'my @matrix[2;2] = [1,2],[3,4]; @matrix[<0>;0] for ^1000000' | 21:07 | |
Command terminated abnormally. | |||
after trying about 10x | 21:08 | ||
yeah, MVM_EXPR_JIT_DISABLE=1 does not prevent it from happening | |||
MasterDuke | yep, valgrind error output looks the same with MVM_EXPR_JIT_DISABLE=1 | 21:10 | |
jnthn | huh, odd | 21:11 | |
MasterDuke | gist updated with some gdb output | 21:14 | |
timotimo | so how can it crash inside expr_compile if the exprjit is disabled? | 21:21 | |
jnthn | Sorry, it's my fault | 21:22 | |
The flag is MVM_JIT_EXPR_DISABLE | |||
Misremembered it :) | |||
(The flag not working is mine, I mean. :P) | |||
Well, I guess in a sense MoarVM in general is too :P | |||
timotimo | aaah | ||
MasterDuke | any reason not to have MoarVM warn if there's an env variable that starts with MVM_ it doesn't know about? | 21:23 | |
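A hypothetical sketch of that suggestion: scan the environment at startup and warn about MVM_-prefixed variables MoarVM doesn't recognize. The known[] list below is illustrative and incomplete.

    #include <stdio.h>
    #include <string.h>

    extern char **environ;

    static const char *known[] = {
        "MVM_SPESH_DISABLE", "MVM_JIT_DISABLE", "MVM_JIT_EXPR_DISABLE", NULL
    };

    static void warn_unknown_mvm_env(void) {
        for (char **e = environ; *e; e++) {
            if (strncmp(*e, "MVM_", 4) != 0)
                continue;
            size_t name_len = strcspn(*e, "=");
            int recognized = 0;
            for (const char **k = known; *k; k++)
                if (strlen(*k) == name_len && strncmp(*e, *k, name_len) == 0)
                    recognized = 1;
            if (!recognized)
                fprintf(stderr, "Ignoring unknown environment variable %.*s\n",
                        (int)name_len, *e);
        }
    }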
MVM_JIT_EXPR_DISABLE=1 doesn't seem to crash | 21:26 | ||
lizmat | confirm, I can't get it to crash with MVM_JIT_EXPR_DISABLE | 21:34 | |
22:02
nebuchadnezzar joined,
japhb joined,
bartolin joined
23:11
Geth joined