03:13 vendethiel joined 08:36 vendethiel joined 08:38 harrow` joined 10:33 vendethiel joined 11:12 pyrimidine joined 11:23 brrt joined
brrt good * #moarvm 11:27
i come to ask your oracle for one more question
how can i better deal with the problem of pieces-of-the-interpreter-that-want-to-know-where-we-are 11:28
in the JIT, that is, where the current state of the art is 'store our position at the start of every *** basic block' 11:29
redundantly, because we treat the JIT 'graph' as a dumb stack 11:30
yes, the graph is a stack, and the tree is a graph. i know 11:31
11:41 mojca joined
brrt the pragmatic approach is to store the jit 'node' belonging to the label somewhere 11:53
that way we can check if a label has already been added and prevent them from being added twice
and then i can factor the dynamic-control-label sequence out as well 11:54
.. the pragmatic approach seems more reasonable now i think about it more 12:08
but i still feel icky about checking the jit entry label everywhere
jnthn .tell brrt We might want to make a list of the things that do need to know, so we can get an overview of them. 12:27
ah, no teller here :) 12:28
12:52 vendethiel joined 12:53 mojca joined 13:45 FROGGS joined 13:52 vendethiel joined
nine Where do SC names like 1EDBBBE0B7DA22E0D938F7A73B4CAD9D9D472056 in mbc files come from? 14:09
FROGGS they are stored in the string heap
moar --dump foo.moarvm should show them at the top
nine Yes, that's where I got it from. But what do they mean? How are they generated? 14:10
I noticed that the "Missing or wrong version of dependency 'p6scalarfromdesc'" only occurs when loading a Panda::Common that was precompiled during installation. If precompiled on first load, everything's fine. 14:11
The precomp file created on installation has one more SC compared to the other one: SC_0 : 1EDBBBE0B7DA22E0D938F7A73B4CAD9D9D472056 14:12
14:13 patrickz joined
nine All other differences between the dumps of the precomp files seem innocent enough. 14:15
FROGGS every literal string and every CU-id of a dependency as well as their paths go in there
when precomping stuff obviously
14:16 vendethiel joined
nine That means it picks up an additional dependency by referencing objects of an additional SC when being precompiled as part of compilation? 14:17
FROGGS when you precompile stuff and something extra is in memory, such a precompiled file will eagerly collect these strings 14:20
I had the same problem when doing the precomp stuff in CURLI
nine Ok. The odd thing is that precompilation runs in an external process. The good news is that this could only mean that there's a difference in how this process is run, i.e. the command line or ENV variables. 14:21
Ought to be easy enough to find out the differences there.
moritz aw
sorry, typo
FROGGS good luck
14:43 vendethiel joined
nine Ok, there's definitely something weird going on with the repository chain 14:52
14:54 mojca joined 14:55 lizmat joined 15:07 vendethiel joined 15:53 vendethiel joined 15:56 zakharyas joined 16:19 mojca joined 16:26 lizmat joined 16:33 vendethiel joined
jnthn nine: I think those IDs are the handle of the Serialization Context; look for handle in TOP or maybe comp_unit in Perl6::Grammar. 16:38
16:53 mojca joined 16:59 vendethiel joined
nine Ok, one step further: it doesn't seem to be Panda::Common at all. I can now repro the bug even with a Panda::Common compiled on first load. So it's probably somehow related to its dependencies. 17:06
FROGGS does it boil down to a 'use lib' or a PERL6LIB env var? 17:07
nine No. And I've already fixed the bug in setting up the repo-chain for precompilation, so the process to recompile Panda::Common is run in exactly the same way
17:17 geekosaur joined
dalek arVM: 79dce11 | FROGGS++ | src/io/dirops.c:
get directory listing in utf8-c8 encoding
17:47
17:53 vendethiel joined
nine I can still reproduce a Missing or wrong version of dependency '<unit-outer>' after stripping Panda down to the Panda::Common module, this module down to "use Shell::Command;", Shell::Command down to "use File::Find;" and File::Find down to "unit module File::Find;" 18:02
18:09 domidumont joined 18:10 domidumont joined 18:15 vendethiel joined, domidumont joined, cognominal joined 19:55 brrt joined
timotimo every time i see find_best_dispatchee show up in my code i get in the mood to flip tables 20:14
and the way in which the profiler sometimes shows times and entry counts as "NaN" doesn't make it any better 20:15
jnthn Why? It normally means you've a good opportunity to make a big saving :)
Yes, why does it do that...
brrt don't work on a late bound dynamically typed language with multiple dispatch then :-P
timotimo the last time i looked, it showed up in dispatching infix:<+>
tell me, how does that make any sense? 20:16
i think i want an nqp op that does the backtrace dump and nothing else
jnthn timotimo: Well, remember the first call will always go through find_best_dispatchee 20:17
timotimo and just throw that into the find_best_dispatchee function
jnthn timotimo: So it's only an issue if you have a *lot* of those calls for a given function
timotimo hm. it accounted for 90% of time spent in infix:<+>, so maybe the rest of infix:<+> is so fast that it won't ever make up for that?
i don't remember how often it invoked that, tbh
i'll try to reproduce that case
357642 entries in total 20:21
356241 to find_best_dispatchee, 356240 to infix:<+> 20:22
gen/moar/m-CORE.setting:9195
and 1 entry to infix:<+>
gen/moar/m-CORE.setting:9198
jnthn Do you have an overload of it? 20:23
timotimo none that i'm aware of
definitely none in the code i'm compiling here 20:24
do you have a working SDL2 for perl6 locally, or should i try to golf the SDL parts out of it? 20:25
i probably should do it just because
jnthn timotimo: The other thing to check is if we're somehow filling up the multi dispatch cache 20:26
timotimo is it a global thingie? 20:27
because i could imagine that my code does fill that up if it's global 20:31
jnthn timotimo: No, it's per routine 20:50
But has a limited number of entries
20:51 lizmat joined
timotimo so how do we get entries in there for infix:<+>?! 20:56
also, i can't reproduce it with the same code on consecutive runs. perhaps because i just recompiled rakudo and therefore it has to re-precompile 20:57
yeah, after rm-ing all files under ~/.perl6/precomp, i get the find_best_dispatchee being called a whole bunch of times. the next time i run it, it's gone. 21:13
i feel like we want an op to invalidate all multi dispatch caches and fire that when precomp is and other start-up-business cases are handled 21:20
jnthn timotimo: Doesn't need an op 21:23
timotimo: You can just null it in the Routine
timotimo oh
jnthn You could also try increasing the multi cache size to see if it makes the issue vanish, then we'll know that's what is going on :) 21:24
timotimo in which of our things does that magic number live? moar, nqp or rakudo?
jnthn MoarVM 21:26
MVMMultiCache.h or so
timotimo 'k
er, huh. 1482070 BOOTHash allocations in AT-POS, in sink it's 1068153 and in Bridge there's another 718436 21:27
was that from making the profiler aware of signatures allocating stuff? 21:28
another sink, another Bridge
jnthn Probably slurpy hashes, yeah
timotimo could that be because methods have *%_ ? or because of proto methods? 21:29
jnthn Could be *%_ 21:30
onlystar protos make a callcapture instead
timotimo i'm not sure how exactly to investigate this 21:35
the AT-POS in question is the numarray role's AT-POS(array:D: int $idx) is raw { nqp::atposref_n(...) }
jnthn I think if we can lexical-to-local lower the %_s then it may just go away anyway 21:36
timotimo m: my num @foo; say @foo.^find_method('AT-POS').cando(:(@foo: 1)).perl
camelia rakudo-moar b6f3ec: OUTPUT«===SORRY!=== Error while compiling /tmp/z8mconLC89␤Can only use the : invocant marker in the signature for a method␤at /tmp/z8mconLC89:1␤------> o.^find_method('AT-POS').cando(:(@foo: 1⏏)).perl␤ expecting any of:␤ constra…»
timotimo m: my num @foo; say @foo.^find_method('AT-POS').cando(:(@foo, 1)).perl
camelia rakudo-moar b6f3ec: OUTPUT«Type check failed in binding $c; expected Capture but got Signature (:(@foo, Int $ where {...)␤ in block <unit> at /tmp/GmOZhrB5ml line 1␤␤»
timotimo m: my num @foo; say @foo.^find_method('AT-POS').cando(\(@foo, 1)).perl
camelia rakudo-moar b6f3ec: OUTPUT«(method AT-POS (array:D $: Int $idx, *%_) { #`(Method|60354656) ... }, method AT-POS (Any:D $: Int:D \pos, *%_) { #`(Method|44421168) ... }, method AT-POS (Any:D $: Any:D \pos, *%_) { #`(Method|44421472) ... }, method AT-POS ($: **@indices, *%_) { #`(Metho…»
timotimo hard to tell what exactly is going on there 21:37
um ... something's gone horribly wrong with moar --dump 21:38
it seems to crash
let me see ...
jnthn urgh
I wonder if it was anything to do with the string decoding changes... 21:39
timotimo Callsite_144 :
num_pos: 2[Inferior 1 (process 13099) exited normally]
probably not flushed stdout?
jnthn yeah, that was perhaps not the last thing really
timotimo strace -e write makes it work 100% 21:40
because ... fuck you, that's why
jnthn o.O
valgrind it :)
timotimo valgrind also makes it complete. i saw one invalid write, which resulted from writing 2 out of 4 bytes beyond the end of "lineloc" 21:42
hehe
looks like a sizeof multiplication was missed
is "fflush" the right function to use when using "printf" and friends? 21:46
like, "fflush(stdout);"?
actually, maybe fsync
jnthn fflush is probably right here 21:47
timotimo 'k
it still just aborts in the middle of business o_O 21:48
oh, i was wrong 21:56
valgrind doesn't let it finish properly
it's also not reaching the end
the dump output weighs in at about 32 megabyte, according to strlen 22:02
22:19 lizmat_ joined
orbus not really news, but nice to know - I just built moar and nqp on a raspberry pi 3 running fedora 23 with no issues 22:21
it passes the nqp test suite - building rakudo now
timotimo neat 22:22
probably a whole bunch faster than on a rpi2, too
orbus I wasn't really watching the time, but I think so 22:23
timotimo jnthn: i found the frame in the CORE.setting dump, but i can't see it write to or even read from its lex_Frame_7847_%__obj
orbus it's clocked 300 MHz higher
timotimo jnthn: would "takedispatcher" ever allocate a BOOTHash? 22:24
jnthn timotimo: No, it never allocates nowt 22:25
orbus I don't have a heatsink or fan or anything on there - with all 4 cores cranking 100% the case is slightly warm to the touch
timotimo the speshed result of that AT-POS is just sp_getarg_o, sp_getarg_i, takedispatcher, atposref_n, return_o
orbus: allegedly the rpi3 has a real heat problem if you don't put a heatsink on it 22:26
orbus so far so good
I wonder if there's a heat sensor
hmmm
it doesn't seem all that hot just based on touching the case 22:27
timotimo the sink (of class List), which is really an empty body and --> Nil in the signature, which is speshed to takedispatcher, wval, return_o, also allegedly allocates a huge amount of BOOTHash 22:28
1068153 invocations of that sink method, which turn into 1068153 allocations of BOOTHash
orbus looks like temperature sensor is exposed in /sys... let's see here...
timotimo so at least the numbers match up in a 1:1 fashion
it's a bit annoying that i can't get at the post-profile bytecode 22:29
orbus looks like it's at 54C 22:31
that doesn't seem that hot
hrm... well, I'm running headless too so the gpu is doing nothing - that probably helps 22:37
timotimo the profiler may be a bit borked actually
jnthn timotimo: Well, noticed it doesn't like to close its windows sometimes... 22:40
timotimo yeah, well, that's just, you know, a UI issue
(you can click outside the window instead)
what i mean is maybe i broke it when i moved the name out of the allocations array 22:41
i don't know what's wrong :\ 22:46
it doesn't make sense to see all those BOOTHashes
json_xs undoes the sorting of hash keys when pretty printing -_- 22:49
it can, however, do it by itself somehow
it doesn't expose that option to the commandline, though 22:50
timotimo has a local copy of json_xs that sets "canonical" 22:54
23:00 vendethiel joined
timotimo i think i'll give the profiler IDs starting at 0 23:03
cool, only three-digit IDs are in this profile now 23:12
down from about 8 digits per ID
er, up-to-three-digit IDs of course
jnthn timotimo: Well, the BOOTHash is perhaps coming from the slurpy hashes 23:15
It'd make sense there'd be plenty of 'em if we're not eliminating them
Sleep time...'night o/
timotimo well, i don't see an op to generate these hashes 23:16
so where do they actually come from? what allocates them?
and if i don't see an op to generate them, how does the profiler know?
23:24 Unavowed joined
timotimo ... ?!? 23:32
i added debug spam to add_instrumentation to output the frame's name and its cuuid
AT-POS doesn't show up at all, neither in its name nor do the cuuids i've logged 23:33
but they do show up in the spesh log