00:17 tokuhiro_ joined 02:14 zakharyas joined 04:07 vendethiel joined 04:27 ShimmerFairy joined 06:26 FROGGS joined, Ven joined 06:29 aiacob joined 06:32 _alexander joined
_alexander moarvm is taking about 35MB on my laptop. is this normal? 06:37
FROGGS _alexander: 35MB to do what? 06:40
_alexander: err, is this about disk space or RAM?
_alexander run a perl script that waits for user input and closes
ram
FROGGS ohh, that's pretty good then
a `perl6 -e ''` used to take more than a hundred megabytes of RAM 06:41
_alexander great improvement then
FROGGS aye 06:43
_alexander it's faster than i thought it would be too
i was told it was slow but so far i haven't found any real bottlenecks 06:44
FROGGS it is 38.6MB to sleep forever on my box
_alexander cool that's more than small enough for me. i'm really enjoying perl 6 so far 06:48
how is threading? any gotchas there? 06:49
FROGGS not that I know of
_alexander excellent. thanks for the help FROGGS! 06:51
FROGGS :o)
masak yay, a satisfied customer! :) 07:17
07:36 Ven joined
_alexander i'm very satisfied, in fact. it's great to use a language/platform that's easier to define by what it can do. 07:52
as opposed to what it can't do. (ie most languages)
that's the long version of 'thanks devs. keep up the great work' ha 07:55
07:56 _alexander left
FROGGS is happy 08:03
08:33 Ven joined 08:44 Ven joined
jnthn So nobody got to le jitter bug, eh? 08:45
08:57 Ven joined 09:06 brrt joined
brrt ok, ehm, the new pull request actually makes a lot of things const that i don't see the reason to 09:07
09:11 cognominal joined 09:40 Ven joined 10:22 tokuhiro_ joined 10:57 aiacob joined 10:58 tokuhiro_ joined 11:16 Ven joined 11:26 Peter_R joined, Ven joined 11:31 brrt joined 12:01 zakharyas joined
brrt jnthn, no, not yet 12:06
12:17 cognominal joined
jnthn OK, mebbe I will later then :) 12:33
brrt on my bike i had a good idea on how to create an inlineable frame with dynlex value depending on the call stack 12:35
now i've forgotten 12:36
jnthn The extents list search really *should* work though 12:37
brrt hmm 12:38
yes
jnthn So either the labels of the inlines are hosed (most probable) or the current label is (which would be weird 'cus we rely on that for returning to work)
brrt not... necessarily
jnthn Oh?
brrt i don't think getdynlex is invokish?
or whichever opcode is compiled from it 12:39
my point is, it'd be possible for the label assigned to the reentry point to be just outside the bounds of the inline's extent 12:40
although i'd have to say it'd be very weird
jnthn brrt: But at the point the wrong lookup happens, then we're some call frames deeper
As in, we've done an invoke from the inlined frame
brrt true
you are right 12:41
hmmm
jnthn That doesn't rule out us being busted in the way you described additionally ;)
brrt but the inlines must be at the end of basic block boundaries, right?
jnthn aye 12:42
brrt hmmm
jnthn I think I did once do a JIT fix to make sure we end up with the label right for inlines too
'cus we sometimes missed exception handlers in inlined code
brrt i recall that, yes 12:43
jnthn But that was exactly for the throw/handler being in the same frame, iirc
12:44 FROGGS joined
brrt hmm and we're sure we're not looking in the same frame for the dynvar? 12:45
jnthn We're looking at a dynvar declared in an inlined frame 12:46
But the point we do the lookup, we're a few frames deeper in the callstack
And the problem is we miss the declaration we should find
brrt how even...
jnthn 'cus the label doesn't fall within the inline extents list that it should... 12:47
brrt well, yes 12:48
:-)
or we're not reading as many inlines as we expect 12:49
jnthn Or that
brrt what would prevent a frame from being inlined
jnthn See the first thing in inline.c
That's the logic that decides 12:50
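The shape of that decision logic can be sketched as below. The real checks live at the top of MoarVM's inline.c; the flag name, size limit, and struct layout here are made up for illustration (the actual inliner also considers exception handlers and other conditions).

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative op-info flag marking ops the inliner cannot handle. */
#define OP_FLAG_NO_INLINE 0x1

typedef struct { int flags; } OpInfo;

typedef struct {
    const OpInfo *ops;       /* ops making up the candidate frame */
    size_t        num_ops;
    size_t        bytecode_size;
} Frame;

/* A frame is inlineable only if it is small enough and contains no op
 * flagged as blocking inlining. */
static int is_inlineable(const Frame *f, size_t max_size) {
    if (f->bytecode_size > max_size)
        return 0;
    for (size_t i = 0; i < f->num_ops; i++)
        if (f->ops[i].flags & OP_FLAG_NO_INLINE)
            return 0;
    return 1;
}
```

A single flagged op (or an oversized body) is enough to reject the frame, which is why brrt is checking what would prevent a given frame from being inlined.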
brrt i see 12:53
13:00 tokuhiro_ joined 13:18 Ven joined 13:48 Ven joined 14:00 robertle joined 14:06 dalek joined
jnthn Damn, update_ops.p6 feels a load faster than it used to 14:52
FROGGS jnthn: run it with perl6-j for some romantic feelings :o) 14:54
jnthn :P 14:55
15:01 tokuhiro_ joined
dalek MoarVM: 663e71a | jnthn++ | src/6model/containers. (2 files):
Add a can_store function to container v-table. 15:03
MoarVM: d28e310 | jnthn++ | / (6 files):
Add isrwcont op.
For checking if we have a rw container.
hoelzro o/ #moarvm 15:13
nwc10 good UGT, hoelzro
hoelzro o/ nwc10 15:14
did GH notifications stop working for a bit? or does dalek not post info on new branches? 15:16
either way, could someone look at my branch github.com/MoarVM/MoarVM/tree/defer-close-stdin and tell me if that seems sane?
15:24 tokuhiro_ joined
jnthn hoelzro: Since there's only one event loop thread and you only seem to modify si->state on the event loop, the mutex lock/unlock isn't needed, I think. 16:03
16:20 FROGGS[mobile] joined 16:30 FROGGS[mobile]2 joined 16:39 FROGGS[mobile] joined 16:43 cognominal joined 16:57 dalek joined
hoelzro jnthn: ah, ok, I was seeing some wonkiness there 17:28
I think because I was querying si->state outside of the async thread?
jnthn: other than that, looks good? 17:31
on another note, is there a way to tell if an MVMArgProcContext is the sole owner of its callsite? I found a memory leak where MVMCallsite objects aren't getting freed (rt.perl.org/Ticket/Display.html?id=126183) 17:38
TimToady that might explain the memory leak I see in rosettacode.org/wiki/Numerical_inte...ion#Perl_6 17:42
hoelzro TimToady: I've observed it when using subsignatures, but it may not be limited to that 17:43
whenever I try to fix it, I segfault Moar ='( 17:45
18:04 FROGGS joined
TimToady I reduced my memory leak to: loop { 0, .1 ... 1000 } 18:46
there is an implicit * + .1 call generated in there, but it leaks with an explicit function as well 18:47
18:48 ggoebel2 joined
TimToady I guess I'll RT it 18:49
RT #126189 19:19
jnthn hoelzro: The usecapture thing is...more delicate than I'd like... 19:41
hoelzro =/
jnthn It was an optimization to avoid an allocation/copying
hoelzro jnthn: what about having a refcount on the callsite object? would that be acceptable?
I'm willing to put in the work, I just don't want to start down a path we'll end up rejecting 19:42
jnthn But...the memory management of it is fraught, and we only really hit it on slow paths anyway
hoelzro: I'd do a perl6-bench run, then make usecapture do exactly what savecapture does, then do another perl6-bench run, and if they come out the same, then we just make usecapture do exactly what savecapture does 19:43
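The difference between the two ops under discussion can be modeled as copy-vs-alias semantics for the args buffer. This is a simplified sketch; the struct, field names, and ownership flag are assumptions for illustration, not MoarVM's actual capture representation.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified model of the two capture strategies. */
typedef struct {
    long  *args;
    size_t num_args;
    int    owns_args;   /* safe to free only when we own a copy */
} Capture;

/* savecapture-style: copy the args, so the capture owns them and can
 * free them unconditionally later. */
static Capture save_capture(const long *args, size_t n) {
    Capture c = { malloc(n * sizeof(long)), n, 1 };
    memcpy(c.args, args, n * sizeof(long));
    return c;
}

/* usecapture-style: alias the live args buffer to skip the allocation
 * and copy; freeing through this capture would be an error. */
static Capture use_capture(long *args, size_t n) {
    Capture c = { args, n, 0 };
    return c;
}

static void free_capture(Capture *c) {
    if (c->owns_args)
        free(c->args);
    c->args = NULL;
}
```

jnthn's proposal amounts to making the aliasing path take the copying path unconditionally, trading the saved allocation for simpler, safer memory management.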
hoelzro it looks to me that savecapture is the problem, though
jnthn Really? 19:44
savecapture makes a copy so it knows it can free it
usecapture points to the live callsite so we have to try not to get into trouble
hoelzro it does, but it doesn't free it
jnthn Oh
If it's its own copy, why? :S 19:45
hoelzro I'm trying to determine the condition in gc_free under which freeing is ok
jnthn I *thought* the idea was "if savecapture, always; if usecapture, never"
hoelzro so there's the condition: github.com/MoarVM/MoarVM/blob/mast...ture.c#L61
however, effective_callsite is always equal to callsite in the situation I'm seeing 19:46
due to github.com/MoarVM/MoarVM/blob/mast...rgs.c#L105
and github.com/MoarVM/MoarVM/blob/mast...rgs.c#L111
jnthn Oh...it's trying to make sure it doesn't free interned callsites and only frees those created due to flattening, I think 19:47
hoelzro sounds right 19:48
21:30 tokuhiro_ joined 23:31 tokuhiro_ joined
hoelzro hmm...I have a fix in place that stops the leak, but now MVM segfaults when building Rakudo =/ 23:41
if I MVM_SPESH_DISABLE=1, it builds fine
src/spesh scares me
timotimo :( 23:57