brrt \o #moarvm 07:46
FROGGS o/ brrt
brrt (webchat clients ftw :-)) 07:47
FROGGS :o)
true about that
brrt i've a blog post in my notebook about how nice it is that moar is a 'register' vm for purposes of compilation 07:48
i don't have my notebook right now, so i can't type it up
if jnthn is here, i've read about how invoke works for REPR's 07:50
long story short, i think it'd be sensible to create a jitcode repr
jnthn brrt: Potentially 08:25
FROGGS hi jnthn 08:29
brrt but.... i still find it very hard to think about :-) 08:35
the reasoning is as follows: using a repr, we can provide for the invoke method and get passed all the right arguments
using a repr, we can use our own stack map walking (for gc), if we choose to do so 08:36
we get a great amount of liberty for what seems like a small price
and i don't think the core interpreter will have to be changed by a great amount, which is always nice 08:37
nwc10 jnthn: does anything in NQP write MVMIterBody from NQP code? 16:24
ie, is my SEGV because NQP has a different idea about alignment of C structs than the C compiler?
jnthn No, it's always iterated through repr ops 16:30
Or other things in MVMIter.c
In fact I don't think anywhere pokes into it except things in MVMIter.c 16:31
nwc10 OK. hmmm
OK, problem, I think, is this: 16:32
#define OBJECT_BODY(o) (&(((MVMObjectStooge *)(o))->data))
couple that with this: 16:33
struct MVMIter { MVMObject common; MVMIterBody body;
};
oh, OM NOM NOM goes irssi
botherit: 16:34
(gdb) p &(((MVMObjectStooge *)(o))->data)
No symbol "MVMObjectStooge" in current context.
jnthn Ah, and MVMIterBody doesn't align the same way as a void*? 16:36
nwc10 exactly
(gdb) p data - (char *)0xb62f8d3c 16:37
$32 = 20
(gdb) p &((MVMIter *)0xb62f8d3c)->body
$34 = (MVMIterBody *) 0xb62f8d54
er, hangon
that doesn't quite say it
(gdb) p data
$35 = (void *) 0xb62f8d50
there, difference of 4
OK. problem is clear
solution not. 16:38
might be that the STable or REPR or something needs to have an entry for the offset of the body 16:39
jnthn I'd rather a solution where we force all objects to align their body the same amount off... 16:42
nwc10 that is potentially wasteful of 4 bytes of memory 16:52
the trick would be to put the body in a union with MVMuint64 and MVMnum64
I think, as one of those would have the 64-bit alignment constraint (on the fun platforms) 16:53
retupmoca jnthn: I got a spesh segfault with one of my precompiled modules; this fixes it: github.com/retupmoca/MoarVM/compar...r...master 16:57
jnthn: but I don't know if that is just pointing to something else being wrong
jnthn That's probably hiding the problem. 16:58
retupmoca that's what I figured
ggoebel111115 o/ 17:13
FROGGS hi ggoebel111115
dalek arVM: 950f640 | jnthn++ | src/6model/sc.c:
Revert accidental commenting out of code.

mls++ for noticing.
17:16
retupmoca jnthn: with 950f640 my panda starts failing again :) 17:38
jnthn retupmoca: oooh 17:44
That's good.
Now we know the failures aren't to do with spesh 17:45
Can you describe the failure? And how to reproduce it?
'cus it built fine here with that patch...
FROGGS jnthn: that MAST::Local seems to work for labels on nqp-m so far... 17:53
jnthn yay :)
retupmoca jnthn: ./rebootstrap.pl ends up giving me 't/builder.t .... No subtests run' 17:55
jnthn Aww
Here it works reliably.
Very weird. 17:56
What if you try to run the test file manually?
retupmoca before precomp: works fine 17:57
after precomp: segfault
(segfault before any output, in fact) 17:58
jnthn Ah... 17:59
And that null check is what fixed it?
Or at least, worked around it...
retupmoca yeah, looks like this segfaults in the same place my module did 18:00
src/core/interp.c:4292
check is null on that line
jnthn Any chance you could revert 4785424 and try it? 18:02
retupmoca looks like it breaks in the same place 18:05
jnthn hmm 18:06
And I guess reverting 375f5d7 fixes it?
retupmoca apparently not 18:09
although I didn't rebuild nqp/rakudo - just moarvm. idk if that makes a difference 18:10
jnthn huh...that's the exact code the other patch commented out... 18:11
retupmoca moarvm 2360aaa is apparently still causing segfault? maybe I do need to rebuild nqp and rakudo 18:12
jnthn: but I'm going to have to look at it later - out of time for now 18:13
jnthn ok
btyler jnthn: what information do you need re the panda thing? latest moar (with that "cached" line no longer commented out), panda segfaults in the same place as before 18:31
jnthn btyler: Yes, but retupmoca said the commit with it in also did...
So I (a) can't reproduce it myself, and (b) am getting conflicting info about it.
Which doesn't bode too well for a fix.
btyler yikes. ok. I'll put moar back at 2360aaa and rebuild things and try again 18:32
unless there's a different course of action that would yield more useful information 18:33
jnthn That's worth a try. 18:35
btyler moar@2360aaa (without having rebuilt nqp or rakudo) let panda build/test/rebootstrap successfully. it -did- still segfault before I removed the blibs from earlier 18:38
retupmoca: is there a chance that panda still had blibs from moar 950f64 when you tried?
jnthn Well, I did rebootstrap...I thought that rebuilds all the things? 18:40
oh, you were asking retupmoca :) 18:41
retupmoca btyler: ooh, it did have blibs from before 18:44
this is what I get for trying to debug at $dayjob
btyler woo, ok, so we're not giving conflicting info anymore, I think 18:49
jnthn ah, ok 18:50
Sadly, a fresh build at HEAD of everything doesn't show the issue 18:51
nwc10 jnthn: why does MoarVM really need the body of all objects to start at the actual same offset? 19:11
(rather than, say, having the offset in the STable)
jnthn nwc10: I'm not sure it really needs it; it's more that it makes something that could be a constant into a variable, which means extra dereferences, and that something JITting a late-bound REPR call would have to emit code to fetch the offset and add it rather than adding a constant offset, etc. 19:24
nwc10: So it's really a performance/simplicity argument. 19:25
btyler, retupmoca: I can get a SEGV on my Linux VM 19:34
Compiling lib/Panda/Builder.pm to mbc
Segmentation fault
Is that the same place?
btyler yep, that's the one 19:43
jnthn OK. Teaching and bad sleep has kinda taken it out of me, but but I have a VM just the same on my laptop which is coming with me to plpw/czpw etc :) 19:53
dalek arVM/loop_labels: 845a51b | (Tobias Leich)++ | / (6 files):
keep loop label as a MAST::Local

Before it was the memory address from the time the label was attached to the block. Now we are safe when the label gets moved due to the gc.
19:58
btyler jnthn: rest up, eh :) let me know if there's anything else I can do (short of becoming a mega C hacker) to help isolate it 20:00
nwc10 jnthn: OK. The counter is that currently it will burn memory (4 bytes, in quite a few objects) on any 32 bit platform which aligns 64-bit values to 64 bits 20:02
until such time as the object header drops by 4 bytes again
which I think you said you thought would be possible with a more funky algorithm 20:03
(replacing that union of two 32 bit values with something smaller)
timotimo we just need to find something fun to put into these 4 bytes! 20:10
nwc10 no, we want to put it on a diet to shed 4 bytes, and make everything work better
timotimo hmm 20:11
FROGGS we could put LOVE in those four bytes though 20:12
jnthn So they'd be...love bytes? :P
nwc10 groans
FROGGS MoarVM - Where there is guaranteed LOVE in every object header!
nwc10 also, clearly I should go to bed. I'm not awake enough to spot the feed lines 20:13
timotimo LOVE in utf8 is 4 bytes
FROGGS so LOVE in utf8 is just a four letter word?
brrt btyler, you can report it on github, maybe other people less overworked than jnthn can pick it up :-) 20:19
e.g. myself :-)
brrt offtopic: i'm desperate for a word processor-like text editor for linux that doesn't suck 21:07
for windows is also ok
i'm very near the conclusion that i'll have to write one myself
btyler the biggest yaks need the most shaving 21:08
(come to the vim side, we have cookies, yada yada yada) 21:09
TimToady reimplement libre-office in P6? 21:14
btyler also, there was a really neat write up about the new LLVM-based compiler in safari, using multiple levels of increasingly better code gen (which take longer, and thus are only switched into once code has been proven sufficiently hot). www.webkit.org/blog/3362/introduci...t-ftl-jit/ 21:15
*js compiler
a lot of it went over my head, but some of it sounded familiar based on talk about spesh and such I'd heard here and in #perl6
brrt i use vim :-) and emacs both 21:19
btyler, i'm of course a horrible cynic, but i'm not all that impressed by adding a 4-stage compiler 21:20
i mean, scala has 47 stages or so? :-p
and they indeed seem to be using much the same techniques, except that they do it by wiring yet another backend into their system 21:22
btyler brrt: sure, and I'm not too knowledgeable about scala, but it sounded pretty neat to me given how slippery js is
brrt (which is much more mature than moarvm will do in the forseeable future, though) 21:23
hmm..
personally i think luajit (luajit.org/) is much more impressive than 'lets wire llvm up to our javascriptcore' 21:24
btyler luajit is pretty incredible, yeah
and I say that in almost complete ignorance of its internals, just based on the performance
brrt keep in mind that they had to make quite a few sacrifices to do so - as you said, 4 levels of compilation, but also a conservative gc, and no less than 4 deoptimizer / stack replacement structures 21:26
(safari FTL team, that is)
dalek arVM: 6fd8cd1 | skids++ | src/math/bigintops.c:
Fix bigint bitops. Really they need to be redone by hand because

the libtommath API is at too high a level for something it was not designed to do.
21:27
arVM: 13a1234 | jonathan++ | src/math/bigintops.c:
Merge pull request #96 from skids/master

Fix bigint bitops.
jnthn 4 deoptimizer/OSR things sounds terrifying. :)
I mean, the MoarVM one so far wasn't too hard to do, but it doesn't have to un-inline yet... 21:28
brrt i'm not /sure/ thats what they're doing, but i believe so, yeah
and i'm assuming they either always go back to an interpreter or to the 'simpler level' - but i'm not sure about that, either
i think that if i were on that team i'd push for better optimization in the existing (3 stage) interpreter rather than adding yet another layer that is known for being incompatible with GC 21:29
TimToady It's turtles all the way back up too!
brrt that made me smile :-)
jnthn :P 21:30
Yeah, well, I chose precise GC for Moar, so we gotta live with that everywhere now. :)
I guess the advantage of LLVM is it's got a bunch of knowledge about things like instruction scheduling on different architectures, which is tedious to replicate. 21:31
brrt thats true
and well, perhaps about apple's own ARM architecture too? 21:32
jnthn One'd imagine it supports whatever apple care for rather well, given iiuc they've rather heavily funded it.
brrt it's for all practical purposes an apple project today imho 21:34
jnthn brrt: Hm, seems I may have a free moment to scribble my JIT/interp thougths down :) 22:01
brrt ok, cool :-) 22:02
it's past 12 o'clock
though
also, rant away anytime you like :-)
jnthn My flight tomorrow is at 11am. :) 22:03
Given I've had to be at teaching by 8:30am for last few days, that's comparatively alright. :)
brrt oh, yes 22:04
swedish start at 8:30?
oh wait, timezone, thats not as weird as it seems 22:05
ot: you know what scientists just love doing? wasting time with broken data 22:07
jnthn Well, 9am start, but need to get coffee, get my demos in order, have time for the train to be late, etc. 22:11
brrt fair enough 22:14
what do you teach these days?
jnthn Was teaching parallel/async programming this week. 22:16
Quite fun :)
brrt i imagine :-)
jnthn Today's exercises including analyzing climate data and querying/plotting earthquake data. :) 22:17
brrt would wish somebody would want to teach my former coworkers about parallel / concurrency
ok, that makes it more fun
jnthn Yeah. Well, of course as soon as you go plot earthquakes, you basically find you drew the ring of fire on a map, dotted the Himalayas, and that you can barely see Japan and Indonesia for the markers. :) 22:18
Somehow on the data set I used I couldn't pick out yellowstone, though. I'd thought it had a lot of little quakes... 22:19
brrt doesn't see how plotting earthquakes would be a parallel operation
jnthn Oh, it's not the plotting.
It's querying the data set.
brrt oh, i see
jnthn Which can be expressed very nicely as a Linq query.
The plotting is just in code I give them. The exercise is really to play with parallel Linq. :) 22:20
But having some shiny visual output of the results makes it more fun.
brrt parallel linq, is that something like an async query? i.e. fire away and get a callback? 22:21
brrt has forgotten what these were called on android
jnthn No
Linq is the thing with all the combinators like Where (grep), Select (map), GroupBy, Join, etc. 22:22
brrt yes, i've seen bits and pieces
lazy evaluation or not?
jnthn And Parallel Linq just lets you opt in to using an implementation that automatically spreads the work over multiple threads.
Linq is lazy
Parallel Linq lets you trade off how much you want laziness vs throughput 22:23
brrt ok, i see
hmm
jnthn So you give it don't batch / batch a bit / batch as much as you can.
brrt that isn't a very simple tradeoff
jnthn No
You only get to pick 3 points on the scale. 22:24
Bit limiting...
...but in reality often sufficient.
brrt i see
is this much used for webapps? what do people use .net for these days? 22:25
jnthn I see a mix of web apps, backend stuff, web services, and GUI apps. 22:26
Most folks have 2-3 of those, though the odd place I encounter is using it for all 4.
brrt better than php, i guess :-)
jnthn oh, by some way :)
C# is quite a decent language these days. 22:27
brrt (did you see the bit about the init system in php? :-o was that a joke?)
jnthn I...no...what? :)
brrt m0n0.ch/wall/ :-D 22:28
it's real
jnthn wow
brrt C# may be nice but windows still annoys me 22:29
'you can't write this file because some other program has an open handle'
fuuuu
jnthn yes, I hate that aspect of Windows too
And I really don't like where they're going with Win8
brrt 'you can't unlink this file because some other program (or maybe your own program!) has an open handle'
'your unicode is broken because you forgot to set the binary write flag lol' 22:30
no, me neither
otherwise its pretty ok :-D 22:31
jnthn brrt: gist.github.com/jnthn/c1b88756121f0525ff28 22:43
brrt thanks 22:44
jnthn Hope it's vaguely coherent. :) 22:50
brrt i think it is
it's kind of hard to see how it'll all play out
i'm not sure what you mean by: 'and then point the interpreter at a (possibly global) piece of memory that has the single "enter the JITted code" operation.' 22:51
i agree with the register / memory layout thing, though, completely 22:53
in fact that is imho one of the great advantages of using a register vm
bytecode is already 'three address code', and memory layout has already been made
makes compilation so much simpler since for any value you 'know' where it should go 22:54
what i'm not sure of, is whether we can 'escape' the c stack, or at least stack-like behaviour somewhere when calling directly from jit-code to jit-code 22:56
(that is, without the intervention of the interpreter)
i /think/ the interoperation between jitted and interpreted code is going to give the most trouble, or challenge 22:58
jnthn Does "just consider every call a tail call" or "just consider the JIT as always doing CPS" help, or make things worse? :) 22:59
brrt that helps
jnthn yay :) 23:00
That's the way the interpreter works today, really.
Whenever we call back into it, things are set up so we're ready to run the next instruction in the place we called.
For JIT it's not *quite* that simple, but it can be close: when we want to make a call, we just (in an unoptimized case) call some "invoker" that gives us back a function pointer that we can jump to, making sure tc is in an appropriate register. 23:01
That is, we don't really call JITted code, just goto it. 23:02
And the only call is JIT calling C functions, and the initial interpreter -> JIT transition. 23:03
So falling out of JIT back to interpreter is really a return.
brrt yes, i was that far with it :-)
jnthn OK :)
It feels workable to me, and one of the "easier" approaches - even if a bit unconventional - when deopt and continuations are factored into the picture. 23:04
brrt my mind says you still need a stack to keep all those continuation pointers 23:05
i.e. i 'call/cc' routine a, routine a call/cc's routine b (with its own continuation), where does routine a store routine b's continuation? or is this already taken care of by the magic growing of mvm registers? 23:06
jnthn The stack is the frame ->caller chain. :)
brrt yes
ok
timotimo mhhh, want moar progress 23:07
jnthn: any low hanging fruit you can think of i could tackle tomorrow or something?
jnthn timotimo: Well...it's maybe not low hanging, but looking into the SEGV we have from the thing that makes CORE.setting comp go way faster would be really valuable.
timotimo i don't know what that is :) 23:08
how much faster are we talking here?
jnthn Stage MAST goes from 20s to 12s on my box.
timotimo cute :)
jnthn Yes, but the Panda build SEGVing is not cute. 23:09
timotimo aye
which commits are that?
jnthn Well, it appears that a revert of 375f5d7 helps 23:10
Or failing that, 950f640
Gonna get some rest...shouldn't cut the flight too late tomorrow :) 23:17
'ngiht
*night, even.
timotimo gnite jnthn!
i'm not sure if i'll be able to put a dent into that bug
brrt goodnight 23:20
do we have a gdb backtrace yet?