jnthn oh no, the MSVC build is busted... 15:01
jnthn timotimo: I'm very confused why MVM_spesh_graph_create duplicates logic that's already in MVM_spesh_alloc? 15:04
Oh, but it divides the buffer size by 4... 15:05
jnthn cleans things up a bit 15:11
dalek MoarVM: 5824c03 | jnthn++ | src/spesh/graph. (2 files):
Clean up/improve spesh mem allocation.

Do the "first block is smaller" optimization in one place. Also remove the artificial size limit on spesh allocations. As a side-effect, fix the MSVC build.
15:17
jnthn Aww, I think the patch that disabled lazy deserialization a while ago may actually have also caused duplicate work... 16:05
japhb jnthn: Was the lazy deserialization bug ever figured out (separately from fixed, I mean just the "diagnosed" part)? 16:49
jnthn japhb: No :( 16:55
jnthn japhb: Also it may have been a bug that got fixed in the meantime 17:06
lizmat meanwhile, startup has deteriorated from .205 to .247 on my machine 17:29
:-(
jnthn Ugh
Any idea what caused it?
A local patch I have here makes it somewhat worse, also. But I've not pushed that.
lizmat I was looking at that yesterday and thought I bisected it down to a set of 3 nqp commits
jnthn I've got a really weird pre-comp test fail as a result of fixing a different pre-comp bug. 17:30
lizmat NQP revision bump -2015.02-68-g2e5e413 17:31
+2015.02-73-gd362467
is what seemed to be the one when I last looked at it yesterday
lizmat continues backlogging
jnthn takes a break for dinner 17:36
jnthn Think I figured out the bug... 19:01
We cache SC indexes in object
And STables
And if you move the object to a new SC due to repossession, I'm not sure it's doing the update... 19:02
Hm, or maybe not 19:03
bbiab
timotimo jnthn: now you make the first spesh memblock huge and the rest tiny? :) 19:45
jnthn timotimo: Huh, I don't think I got it backwards? 20:18
size_t buffer_size = g->mem_block
? MVM_SPESH_MEMBLOCK_SIZE
: MVM_SPESH_FIRST_MEMBLOCK_SIZE;
If we have a mem block already, the next one should be the normal size; otherwise use the smaller size for the first one.
timotimo +#define MVM_SPESH_FIRST_MEMBLOCK_SIZE 32768
+#define MVM_SPESH_MEMBLOCK_SIZE 8192
:)
jnthn oh, darn
timotimo s'ok
jnthn Yeah, those numbers want to be the other way around 20:19
timotimo++
Do you want to fix, or shall I?
timotimo i have another commit in a branch that ramps up the spesh block size in three steps
because my measurements showed that about half of all spesh graphs ended up allocating only the first 50% of the second block :) 20:20
though if it's just barely above 50%, having a third step in there would actually make it *worse*
jnthn At some point it's maybe not worth the code complexity. 20:22
Also we should retire dead-end specialization attempts at some point too
Not to mention compute the specializations on a different thread... :)
timotimo i've been thinking about retiring spesh things that haven't had the chance to get finished for a long time 20:34
i just don't really know what to hang it off of
jnthn Me either, so I decided to kick it down the road until a larger re-work
timotimo has kind of forgotten what things he could maybe be doing 20:36
jnthn Could you add JIT support for the getregref_* ops, maybe? 20:37
Well, and you might want to take some of the localref work off my hands in general... 20:38
timotimo at some point you were saying we could kick the decont_all thing from our code-gen of nativecall invocation?
jnthn Only as part of a bigger re-visit of how we handle native invocation. 20:39
timotimo OK
jnthn But yeah, getting lex -> loc stuff in Perl6::Optimizer able to handle more cases again would help with perl6-bench's results 20:40
Other thing you might like to look at
uh, no, forget that :)
timotimo i could; i didn't look at the references things at all yet, so i don't have sufficient clue how to make the optimizer happier yet
jnthn Well, at the moment we can't take a refernce to a register 20:41
I mean, moar can
timotimo jvm can't?
jnthn But I didn't implement scope localref
No, JVM we can't, we'll have to just not lower.
But we shouldn't skip an optimization that's possible on one backend just 'cus the other doesn't have it. 20:42
timotimo right; did you mean scope localref isn't implemented on moar yet? 20:43
jnthn I meant that moar has the things we need in order to implement QAST::Var with scope localref
And on JVM we won't be able to do that - or at least, if we do it won't beat the current scheme we have for lexicals. 20:44
I was expecting to add something to HLL::Backend so the optimizer can ask if the backend supports localref scope
timotimo that seems like a very simple thing to do, just a little method 20:45
jnthn Right, that bit is easy.
timotimo would the optimizer turn a QAST::Var of some scope into a localref scoped var?
or is that for the code-gen to do?
jnthn Implementing localref scope in the QAST -> MAST is more.
Perl6::Optimizer will turn some things with lexical scope into having local scope if it can prove it's safe.
timotimo i remember that part from last year :)
jnthn However, if it sees a lexicalref anywhere then it now backs out of the optimization. 20:46
Which is a bit unfortunate given
my int $i = 0; ...while loop using $i that would love it to be local... say($i); # passes it by reference.
timotimo can nqp benefit from the references stuff at all? 20:59
i think i've asked this question before
jnthn Doesn't really need it 21:01
timotimo OK
dlem o/ 21:05
jnthn: Thanks a lot for the session on Tuesday, inspiring stuff! 21:06
jnthn dlem: Welcome; I had fun doing it. :)
timotimo jnthn: currently we skip the variable (or rather: the block) if we see a lexical that's been referred to by a lexicalref; am i right in assuming that if the var had been declared lexicalref from the start, it'd be fine?
jnthn timotimo: Not really in so far as we don't lower those declarations at all either, afaik. 21:07
dlem jnthn: I had a thought afterwards (probably a really stupid one :) which I'd like to pass by you.
jnthn dlem: Sure. 21:09
dlem We discussed automatic destruction of objects upon leaving a scope.
And then I remembered that you talked about escape analysis.
Would it be possible to strike two flies with one swat, as we'd say in Norway? :-) 21:10
If you're going to store objects in a temporary area outside of the GC, I reckon you'll have to destroy the objects in the temporary area when leaving the scope anyway. 21:11
timotimo jnthn: i'm not 100% sure how this is supposed to play out :\
jnthn dlem: Thing is that escape analysis is very much an optimization, so relying on it for semantics is likely a bad idea. 21:12
dlem: And many, many things can thwart it. 21:13
dlem I see. Well, I couldn't resist asking :-)
jnthn It *is* an interesting thought
It'd be possible to design a programming language that relied on escape analysis for its semantics, I guess, but I suspect doing so ties your hands in a lot of other ways. :) 21:14
dlem I for one wouldn't mind if I had to enable optimizations to get the "Perl 5" behavior here. 21:15
timotimo but you're not guaranteed to get the perl 5 behavior even with the optimization 21:16
dlem If the escape analysis worked perfectly, why not?
timotimo hmm 21:17
dlem And just think how it would help with porting Perl 5 programs ;-)
jnthn I suspect you don't get very far before the analysis comes back inconclusive. 21:19
Escape analysis is typically used to eliminate short-lived allocations.
And determine you can elide lock taking. 21:20
dlem jnthn: Yes, there are surely problems which I don't see. The less you know about something, the easier it appears :-)
jnthn :) 21:21
timotimo jnthn: how is lexicalref supposed to behave? turn into a localref or something?
jnthn timotimo: Well, if we get this right then it's just lexical -> local, lexicalref -> localref 21:22
timotimo: The thing to understand is that there's actually a 2x2 grid of possibilities that each compile differently.
decl(lexical, lexicalref) X use(lexical, lexicalref) 21:23
Declared lexical + access lexical = just a normal lexical lookup of the correct type
dlem In my simplified view on escape analysis, I reckoned that since you were going to identify the short-lived allocations, this would include all allocations of variables leaving the scope without any references to them.
Chuckle now if you like :-) 21:24
jnthn dlem: Yes, but file handles usually will be (a) allocated in the scope of some open(...) routine, and thus be escaping it, then (b) be passed to various I/O methods or subs
And it's somewhat unlikely that you'd end up with those being inlined.
dlem Hmm, yes, I guess it gets to be a lot to keep track of. 21:26
jnthn Yeah. There was a paper I read that actually calculated re-usable escape info, so if you could figure out all of your callees you could do a bit better
But it was extremely complicated. 21:27
timotimo jnthn: and decl lexicalref and use lexicalref can turn into a localref, yeah?
jnthn timotimo: Correct
timotimo: The idea for Perl6::Optimizer is that you s/lexical/lexicalref/ and s/local/localref/
timotimo: Note that all of the following can happen:
Declared lexicalref, accessed lexical (becomes declared localref, accessed local) 21:28
Declared lexicalref, accessed lexicalref (...)
Declared lexical, accessed lexical (...)
Declared lexical, accessed lexicalref (...)
dlem In any case, now that I've planted this half-baked idea in your mind, who knows what will eventually happen ;-)
timotimo wait, decl lexical, accessed lexical turns into localref? 21:29
jnthn timotimo: No
I was assuming you could s/lexical/local/ :)
timotimo ah, i was meant to fill the ... myself
jnthn Right :)
timotimo %)
jnthn So the Perl6::Optimizer's job is just to make sure it turns all lexicalref => localref, and all lexical => local. The tricky work is in the QAST -> MAST bit. If you look at lexical and lexicalref in QAST -> MAST, you'll see it handles all 3 permutations. 21:30
timotimo so basically "the ref or not ref stays"
jnthn uh, all *4*
Correct
timotimo underneath that, just turn lexical into local
jnthn And we need to handle all 4 permutations when implementing localref
dlem jnthn: Keep up the good work, it's truly inspiring to see the great progress you're making! 21:31
jnthn So the hard part isn't in Perl6::Optimizer at all, but in the code-gen.
dlem jnthn: And thanks again for the session.
jnthn dlem: Thanks! And thanks for the ideas. :)
timotimo damn
but i suppose i could look into the code gen 21:33
dalek MoarVM: 6535f92 | jnthn++ | src/6model/serialization.c:
Cope with STable change in repossessed objects.
21:34
MoarVM: c6e8df8 | jnthn++ | src/6model/serialization.c:
Further fixes to repossession.

Now that we update the STable, we must take care to repossess all STables *before* any of the objects, otherwise we end up with a huge mess of re-deserialized STables. Additionally, we update the SC of things we repossess at deserialization time.
jnthn timotimo: Well, you can see how I did lexical/lexicalref
timotimo: And local/localref will be simpler than that.
timotimo: Because you have no late-bound cases to handle.
timotimo hmm, yeah, compile_var is kind of huge already %)
jnthn :) 21:35
dlem bids everybody a nice evening
See you!
jnthn you too, dlem o/
That second serialization patch took me longer to figure out than it shoulda...
Now my "kill PARAMETERIZE_TYPE" work has only one remaining casualty.
timotimo is that only a correctness patch or does it also improve memory usage or serialized blob size? 21:36
jnthn It's possible we were overly detecting repossessions. 21:38
So could be a win. 21:39
Primarily correctness.
timotimo good, as i thought
jnthn++
jnthn It may well fix some other lingering pre-comp bugs 21:39
(Between the two patches, at least.) 21:40
Ah, and my last fail is thankfully not serialization related at all
It's that ^foo methods don't interact with inheritance.
That one I can fix. 21:41
Probably easily
But first, a stroll :)
timotimo have a good stroll :) 21:42
so what i'm doing as a first step is add a localrefs and a localref_kinds attribute to BlockInfo as well as the accessor methods 21:46
hm, will we ever have contvar localrefs? 21:47
i'll let %!local_vars_by_index be shared among locals and localrefs, that seems right to me 21:51
hm, since locals share their namespace with localrefs, why have a separate localref_kind from local_kind? 21:52
hm, ok, so if something's declared lexicalref and we access it as lexical, we just access the lexicalref and decont it 21:55
that makes sense; i expect the localref stuff would look the same
timotimo and for _o localref doesn't make sense 22:08
jnthn A lexicalref decl always goes in an _o 22:12
Same with localref
timotimo since localrefs behave like containers, i can just use "set" on them to put a value in there?
oh? i didn't realize that
jnthn localref => localref is just a set 22:13
timotimo why do we have %!lexicalref_kinds then?
jnthn local => local is just a set
Because we need to know what kind of native it references
timotimo getregref is what i do if i have a decl local and access it as localref? 22:14
jnthn Correct
timotimo ah, you mean the allocated register has an _o because a *ref is an object
jnthn And in the other direction, decont_[ins]
timotimo that part i understand
jnthn Yes
timotimo good
jnthn I mean it's an o register for refs
timotimo and if a BINDVAL is set, i just emit a piece of code to put the value of BINDVAL into the localref
or the local, if i have a :decl(local) and access it as localref 22:15
jnthn Note that binding is forbidden in various cases. 22:16
You can bind local to local, and you can bind an object to a localref
Just follow the lexical[ref] rules for locals too 22:17
jnthn picks music, a beer, and digs into fixing the last regression from cleaning up parameterization 22:19
timotimo OK 22:26
jnthn waits for the JVM build to get done to see if his port of the serialization fixes will help there also 22:50
timotimo <3 22:54
now how the hell do i test the codegen? %) 22:55
jnthn Add a test to qast.t? :) 22:56
oh, though you'll only want to run it on moar 22:57
timotimo aye
jnthn So you may want to steal the qast.t approach and then put the test in the t/moar folder
timotimo my $valmast := self.as_mast_clear_bindval($*BINDVAL, :want($res_kind)); 23:12
(this is for localref, localref)
if i want to restrict this to only passing obj
would i just put :want($MVM_reg_kind_obj)?
and that would appropriately error out if the wrong kind of thing gets passed?
jnthn $MVM_reg_obj 23:13
Won't error out, will coerce
But that's fine
Note that lexicalref code for bind does $MVM_reg_obj there also :) 23:14
timotimo ah
i confuse myself all the time 23:19
am i correct in seeing that there are no lexref tests in qast/ yet? 23:51
jnthn Yeah 23:56
Partly 'cus I could easily cover it from Rakudo 23:57
Partly 'cus I had no intention of implementing this stuff on Parrot.
timotimo OK
i have my first test written
gist.github.com/timo/b68416dedda71b72a458 does it seem like i have the right idea in general? 23:58