TimToady o/ 00:03
00:46 ventica joined 01:14 FROGGS__ joined 02:57 ventica1 joined 05:27 lizmat joined
sergot hi \o 06:15
06:37 woolfy joined 07:03 FROGGS[mobile] joined
FROGGS[mobile] jnthn++ 07:05
ventica1 Hi all... I'm interested in contributing to MoarVM... what are the "biggest needs" (links/pointers to reading are welcome)? I'm most interested in working on implementing unimplemented or only-plumbed-in features. My background is computer hardware engineering (CPU microcoding and CPU core validation)... I'm very comfortable with parallelism, threading, events, etc. 07:08
07:12 FROGGS[mobile] joined 07:14 zakharyas joined
FROGGS__ hi ventica1 07:16
ventica1: here is a roadmap with bigger things to implement: moarvm.org/roadmap.html 07:17
ventica1: but there are many many little things that Perl 6 (rakudo) needs, which want to be supported by MoarVM directly 07:18
ventica1: here you can see missing features of rakudo: perl6.org/compilers/features
hatseflats I'm looking for some info as well, I was introduced to MoarVM by a friend who claimed that the meta object model is crafted from 3 'super'baseclasses, but I can't seem to find proof of this in the code, am I blind? 07:21
He mentioned Cool, Mu, and another class, but I can't seem to find references to this setup
ventica1 FROGGS: Thx, I'll diff those two lists... :) 07:22
TimToady Cool, Mu, and (probably Any) are not metaclasses, just base classes; metaclasses would be more like ClassHOW, RoleHOW, or fundamentally, KnowHOW 07:23
m: say 42.^mro 07:24
camelia rakudo-moar 6504db: OUTPUT«(Int) (Cool) (Any) (Mu)␤»
TimToady those are the parents of an Int
m: say 42.HOW 07:25
camelia rakudo-moar 6504db: OUTPUT«Perl6::Metamodel::ClassHOW.new()␤»
TimToady that's a metaclass object
hatseflats right, mixed terminology is one of the reasons I'd like to read up on the subject(s), is there any spec available?
TimToady the P6 specs talk primarily about base classes, and tend to hide everything behind the "HOW" wall. I'm sure there's some 6model docs lying about somewhere though that describe the HOW stuff 07:26
ventica1 concurrency looks interesting... and looks like Moar still needs a bit of work on that? 07:27
TimToady sure, it's still a work in progress, and has bugs, and doubtless misfeatures in the specs even
hatseflats 'kay, I'll try digging a bit further then :)
TimToady have the appropriate amount of fun! :) 07:28
FROGGS__ here are docs about 6model implemented in nqp/rakudo: github.com/perl6/nqp/tree/master/docs/6model
ventica1 o/
FROGGS__ ventica1: it has concurrency support, which is "almost" stable 07:29
hatseflats Juerd must've mentioned the two in passing without mentioning what 6model really was a part of, thanks FROGGS__
ventica1 Nice. So, in your estimation, what's the biggest "hole"? I know that's a subjective question but I'm out of the loop and just trying to get a feel of where my efforts could be best placed 07:31
TimToady we're very careful to keep our class hierarchy independent of our metamodel, and both of those independent of the representational polymorphism model
ventica1 FROGGS: I see "asynchronous file I/O" listed as due 8/2014 in the first list you sent me 07:32
FROGGS__ ventica1: I think feature wise shaped arrays/hashes might be very worth it, but multi threaded hypers would be awesome too
TimToady was just gonna mention them
the hypers 07:33
probably needs the compact shaped array support before we can feed the hypers to your GPU though... 07:34
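For illustration, a minimal sketch of the compact shaped arrays being discussed, using the syntax from the S09 synopsis (not yet implemented in Rakudo at the time of this conversation):

    my int @matrix[4;4];     # fixed shape, natively typed, compactly stored
    @matrix[1;2] = 42;       # multi-dimensional indexing with ';'
    say @matrix[1;2];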
hatseflats I don't really understand what a representational polymorphism model means, what makes it representational?
FROGGS__ hypers are an awesome primitive to do something in parallel without going insane starting and joining threads
TimToady not even the class knows how its attributes are laid out in memory, basically; could be stored in a hash, or in a C struct, or whatever 07:35
FROGGS__ hatseflats: I think what is meant here is that you have representations that define how an object is laid out in memory
hatseflats ah, in that sense, I see
TimToady basically, instead of doing what most dynamic languages do and forcing conversion of foreign objects into the representation model of the language 07:36
FROGGS__ like an Array in Perl 6 can be of repr CArray[uint8], which means that its elements are really just bytes in memory in a row
TimToady we abstract that away, so a P6 object could, in fact, simply be a mapping of a C++ object, say
or Java, or whatever
FROGGS__ so a $carray_of_uint8[3] would read the fourth byte of that memory area
for the normal Array type you would have a list of pointers to Perl 6 objects that again know how they are laid out in memory 07:37
TimToady basically, the repr writes your accessors for you so you don't have to care
FROGGS__ yes 07:38
and it works very well :o)
for example, you can cast a C structure to a Perl 6 object just by using the object's class definition
hatseflats that sounds really clever 07:41
FROGGS__ here is the positional access of a CArray implemented btw: github.com/MoarVM/MoarVM/blob/mast...ray.c#L218
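As a hedged illustration of the REPR idea at the Perl 6 level (assuming the NativeCall API of that era; the TimeVal class is made up for the example): a class can choose a representation so that its attributes map directly onto C memory, while ordinary classes default to the P6opaque representation.

    use NativeCall;

    # Attributes laid out exactly like the corresponding C struct, so memory
    # coming from C code can be viewed as this Perl 6 object.
    class TimeVal is repr('CStruct') {
        has long $.tv_sec;
        has long $.tv_usec;
    }

    # A raw byte buffer: $buf[3] would read the fourth byte directly.
    my $buf = CArray[uint8].new;
    $buf[0] = 0xFF;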
ventica1 FROGGS: Cool... I'll go read up on those. Any hints/links/docs?
07:41 brrt joined
hatseflats clever in the good sense, that is, not the kind where 'clever code' is stupid 07:42
FROGGS__ S03:4173
synopsebot Link: perlcabal.org/syn/S03.html#line_4173
TimToady has wanted representational polymorphism ever since he first hacked expat into P5 to support XML.
FROGGS__ ventica1: see that link
07:43 klaas-janstol joined
ventica1 thx 07:43
brrt \o
FROGGS__ but in short: @a »+« @b will add @a[0]+@b[0] and all the other element pairs in parallel (in any order), and return a list in the original order
brrt btw, welcome hatseflats, ventica1 :-) 07:44
FROGGS__ * whether it runs in parallel depends on the machine; if you cannot do MT, you will only have one thread of course
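A small usage sketch of the hyper operator being described (whether the additions actually run on multiple threads is left to the implementation):

    my @a = 1, 2, 3;
    my @b = 10, 20, 30;
    say @a »+« @b;    # 11 22 33; pairs may be added in any order,
                      # but the result keeps the original order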
brrt truly wonders if that will ever give an advantage aside from specialised (GPU) solutions
FROGGS__ but if we can make use of CUDA on some platforms, that would be awesome of course
xiaomiao s/cuda/opencl/ ;) 07:45
ventica1 I vaguely remember watching a talk on YT TimToady gave on the hypers
hatseflats brrt: thanks, unlike many other channels, I feel welcome as well :)
FROGGS__ brrt: I had a prototype of »+« on parrot, and it was like 3.5 times faster
brrt truly? i've implemented the same idea in go once, and it was 2x slower :-) 07:46
that may have been go's lousy locking, though, i don't know about that
ventica1 brrt: o/
brrt also, i
FROGGS__ brrt: but that involved an array of native ints, so I could just add them without much dispatch
brrt i understand, but i'd think locking overhead would be too large 07:47
of course, that may still have had more to do with my specific solution
(all this not to say that it isn't very cool, which it is) 07:50
TimToady well, of course it's gonna depend on the granularity of your memory model, but for compact storage I could see just dividing the work into lock-free regions that are far apart enough not to need interlocks 07:52
maybe with a second pass to fill in the inbetweens that were necessary to cushion the first pass
brrt fair enough, but that looks like more work than it seems at first sight :-) 07:53
FROGGS__ brrt: here was my attempt: gist.github.com/FROGGS/6127100 07:54
TimToady Our aim is to torment the implementors on behalf of the users. :)
FROGGS__ I am sure there was a parrot or nqp issue, but I can't find it
TimToady parallelizing lists is a different matter, of course; you need some kind of divide-and-conquer representation 07:55
brrt mumbles about being happy not to implement a JIT for parrot 07:56
TimToady so Lispy cons lists are really bad that way
brrt that depends on if you can update a head pointer atomically and cheaply
no, that's not right
TimToady that's why P6 doesn't specify linked lists for its basic list type
brrt scrap that 07:57
ventica1 so, does Moar have a vector representation for lists, internally? Or a flex representation between vector and LL?
brrt ok, i'm going to figure out why windows build are broken
ventica1 or just LL only?
brrt well... good question
let's see
it's an array 07:58
FROGGS__ ahh, also here github.com/rakudo/rakudo/commit/71...4594748af8
TimToady currently basically a vector of generators, since they're lazy by default, but reified storage is array, yes 07:59
FROGGS__ though, there was a nice discussion that is probably lost :(
brrt ventica1: github.com/MoarVM/MoarVM/blob/mast...MVMArray.h
TimToady we need to refactor our demand model to better communicate to a list what its contextual policy is: lazy, eager, hyper, or race 08:00
ventica1 so, then we can assert that a list needs to be reified into an array before getting divided up for use by a hyper operator? (forgive me for n00b questions....)
brrt fun stuff FROGGS__
TimToady that's what hyper should be able to do, but we don't have it yet
pmichaud++ was gonna do that refactor, but real life intervened 08:01
ventica1 brrt: Thx - n00b q... diff between int and num?
brrt i can actually imagine a list with a different type of repr associated, but i wouldn't know how that should be done
num is floating point, int is integer :-)
ventica1 ah, yes, thx
TimToady and the lowercase ones are unboxed natives, while the uppercase ones are boxed 08:02
FROGGS__ boxed and not machine size limited
TimToady well, except the denominator of Rat
ventica1 REPR is representation? 08:03
TimToady which is Int / uint64 basically 08:04
yes
ventica1 thx
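To make the boxed/unboxed distinction concrete, a short hedged sketch (lowercase types are machine-sized natives, uppercase ones are boxed objects):

    my int $i = 42;            # unboxed native integer, machine-sized
    my num $e = 2.718e0;       # unboxed native floating point
    my Int $big = 2 ** 100;    # boxed, arbitrary-precision integer
    my Num $f = 3.14e0;        # boxed floating point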
TimToady: OT question... how do u feel about Perl adoption/retention in the broadest possible sense? I know that's a Big Hairy Question but just curious about ur feelings on it. Just reading the comp news, one can get the feeling that Py/Rb/flavor-of-the-month are the future and Perl is falling into disuse. I know that's not true but just curious of your 1/2-sentence feelings on it. 08:07
1-or-2 sentence*
TimToady We've said all along that it will take as long as it takes, and we're not gonna rush it. Perl 6 will certainly succeed on its merits over the long term, I believe. 08:09
ventica1 cool... I agree with that assessment 08:10
TimToady Over the short term, people will say the darndest things. :)
ventica1 lol - and Perl 5? 08:11
jnthn morning, #moarvm
ventica1 o/
TimToady Perl 5 has already succeeded on its merits, and continues to acquire more of 'em. :)
ventica1 :)
TimToady it will never, however, be able to mutate into Perl 6 without a massive number of (presumably unacceptable) deprecation cycles 08:12
brrt ventica1: i should add that this is more a 'what are people talking about' kind of trend than a 'what is actually used' one 08:13
ventica1 we'll just write a pure-Perl 6 version of Perl 5 and deprecate the old code base... haha jk
brrt i.e. there are still very effective companies working with perl5 08:14
ventica1 brrt: indeed... but sometimes it's nice to hear from the ppl involved in order to cut thru all the noise
TimToady well, that's what v5 is, which you should ask FROGGS__++ about
ventica1 not always easy to know what to believe
aha
so there's a use v5;?
TimToady what to believe is what you yourself are willing to accomplish; as they say, the best way to predict the future is to invent it :)
brrt well, you can believe that whatsapp, which used (among other things) perl internally, was acquired for... a number that was so high that i can't recall, but postfixed by a dollar sign 08:15
ventica1 haha yeah that buyout was insane
brrt and yes, FROGGS__++ is working on that indeed
but they were using perl
(i recently had the same discussion with my former manager)
ventica1 my experience is that corp. mgmt mostly doesn't comprehend the Perl concept at all 08:16
brrt duckduckgo, the darling of paranoid hackers everywhere, is using perl
ventica1 It's "just a script"
TimToady m: use v5; $_ = "a b c"; /(A B C)/i; print $1
camelia rakudo-moar 6504db: OUTPUT«===SORRY!===␤Could not find Perl5 in any of: /home/p6eval/rakudo-inst-2/languages/perl6/lib, /home/p6eval/rakudo-inst-2/languages/perl6␤»
TimToady aww
ventica1 I cud never beat it into their heads that, yes, it's "REAL CODE" (R)
brrt is rebooting 08:17
ventica1 Or, as Robert Anton Wilson dubbed them, the Mgt
not to insult people of smaller stature 08:18
TimToady well, this mgt should probably rest his eyeballs, now that the smart people are coming back online :)
zzz & 08:19
jnthn 'night, TimToady 08:22
ventica1 ok another n00b q... i'm seeing function def'ns in MVMArray.c but no corresponding declarations in MVMArray.h... am I blind or should I be looking somewhere else? 08:23
jnthn ventica1: Near the bottom of that file, you'll see the functions all get shoved into a table. 08:24
ventica1 oh my, I have never seen anything like that before... amazing 08:25
FROGGS__ ventica1: yes, there is a 'use v5' but I am rewriting it right now, so that it is installable using panda, our module installer
jnthn When interpreting, representation stuff is late-bound.
Well, when naively interpreting :)
ventica1 so... if I'm an array and I want to say "How am I represented right now?" what call do I make to answer that question?
jnthn At a Perl 6 level, .REPR. Though note Array there is actually a rather more complex object that has a reified part and an unevaluated part, to support laziness. 08:26
ventica1 FROGGS: Nice... I think that will be key to wider Perl 6 adoption when it gets to that point 08:27
jnthn A Blob is lower level, however.
m: say "abc".encode('utf-8').REPR
camelia rakudo-moar 6504db: OUTPUT«VMArray␤»
ventica1 jnthn: ok, so, I think I kind of understand what u mean about the reified thing... but this was raised in the context of me trying to understand hypers (since this is an area needing development work), and my understanding is that we would basically need to convert the list into a reified form for use by the hyper op 08:29
jnthn On areas to contribute, async file IO may be an easy-ish on-ramp, since it's mostly adding code that looks a good bit like stuff that already exists. In fact, part of it may be factoring things out.
ventica1 So, just trying to wrap my head around that... seems like the first step in the flow would be "How am I represented right now? Am I reified or not? If not, get myself into reified representation in preparation for passing to hyper op"
jnthn: OK, that sounds like a good match to me, too... I'm fairly comfortable with asynch stuff 08:30
jnthn On reification, though, it's already triggered by the implementation of hyper ops, which before they do any work check the number of elements on each side, which forces the reification. 08:33
08:33 brrt joined
ventica1 ah, that makes sense, link reification to a size-check 08:34
jnthn We implement a lot of Perl 6 in Perl 6, and try to only create abstractions in the VM where they are needed or where they offer significant performance benefits.
ventica1 so, is asycn i/o in /src/core ?
jnthn I think so far as hypers go, for non-native data types, we can just use existing threadpool infrastructure to do the work division and handle it in Perl 6 code. 08:35
For native types, however, it's probably worth pushing it down to the VM in so far as we can take advantage of SIMD style things.
ventica1 So, rakudo must then have some kind of Perl 6 core running under the hood?
jnthn Yes and no.
The built-ins are just written in Perl 6 itself, with an initial bit of wiring together. 08:36
The compiler is written in NQP, which is a small, more easily optimizable dialect of Perl 6.
brrt i wonder if the JIT can be a part of the repr's one day
jnthn brrt: Well, REPRs already can do spesh things
brrt: But it'd occurred to me that the JIT could de-virtualize a load of REPR calls. 08:37
Not an immediate priority, but worth it at some point.
brrt interesting 08:38
visual inspection, btw, doesn't show anything obviously wrong with the win64 code, so i suspect i'll have to arrange some kind of VM
jnthn ventica1: Anyways, async sockets are implemented so far. There's some code in src/core/IO/Socket/Async.pm that is the Perl 6 level API. It uses various nqp::op things which are an abstraction layer over the VM.
brrt or reinstall the hard disk containing windows
jnthn ventica1: The actual I/O itself is handled by Moar. src/io/asyncsocket.c contains the heart of it. 08:39
It's using libuv, and the VM runs an event loop thread which it uses to service async I/O requests. 08:40
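A rough usage sketch of the Perl 6-level API referred to above (method names are assumptions based on src/core/IO/Socket/Async.pm of that era and may differ):

    # .connect returns a Promise that is kept with the connection object.
    my $conn = await IO::Socket::Async.connect('localhost', 8080);
    await $conn.send("hello\n");   # the actual write is serviced by the event loop thread
    $conn.close;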
ventica1 who enforces ordering? libuv or Moar? 08:41
jnthn Can you be more precise about ordering? 08:42
ventica1 i.e. two reads to same I/O or two write to same I/O
jnthn Ah
ventica1 or i guess OS maybe 08:43
jnthn There's two things at play there. The first is that IO handle operations are protected by locks. That handles two threads trying to start an operation on the same I/O handle at the same time.
The second part is that for async things we have a single event loop thread; since it's a single thread, it's only ever processing one notification at a time. 08:44
Typically it does the immediate things it needs to in order to get the stuff from libuv, and drops a notification into a concurrent queue, where one of the Perl 6 thread pool threads will pick it up and get to work. 08:45
(And if there's threads with nothing to do, they'll be waiting on a condvar, and one will be signalled)
ventica1 wow... so the request can potentially move up a level (to Perl 6) and back down to the VM (once the thread is assigned)? 08:46
jnthn I'm talking about callbacks or completion notifications there. 08:47
ventica1 ok
jnthn So there's no movement back up to the VM
Unless the code wants it
We send a "message" in the queue containing the Perl 6 callback to run along with the data. 08:48
So the thread that ends up with the work has all it needs.
ventica1 reading up on libuv 08:50
brrt anybody has experience with running MSVC on wine? 09:05
well, that doesn't work, so i don't think anybody has
FROGGS__ no, I dont think that is a good idea :o) 09:09
ventica1 sounds painful in the extreme 09:10
FROGGS__ install virtualbox + windows 7 + msvc express -> done (sort of) 09:12
you need the windows sdk's of course
and activeperl
brrt uhuh... ok, will do that 09:13
but that means i'll have to get a windows 7 image first 09:14
jnthn brrt: I'll have a bit more time to look into the reason it explodes on Windows this evening, if you'd rather work on other bits... 09:15
brrt hmm... other bits are easier
:-)
if you would, i'd be happy 09:16
jnthn I wanted to look into it last night, but fixing the OSR lexicals occasional-bug took a while.
brrt it might be something really silly (i hope it is)
09:16 zakharyas joined
brrt i have enough on my mind wrt extops 09:16
jnthn extops and OSR are the two things that really block moar-jit helping the benchmarks.
brrt ok, well, i'll need to know a bit more about OSR before i know what to do with that 09:17
jnthn Sure, just ask when you're ready. 09:18
I think an initial step may be marking out which extops are JIT-friendly. 09:19
brrt non-invokish ops are most friendly 09:20
jumpish ops are least friendly
also, throwing stuff, we should handle that some day, too
ventica1 is there a test that I can run to exercise the async socket code? 09:21
jnthn t/spec/S32-io/IO-Socket-Async.t 09:22
(If you didn't "make spectest" in Rakudo yet, it may not have been fetched) 09:26
ventica1 yeah, i haven't built anything at all yet, still dizzy and it's bedtime 09:27
jnthn Ah :) 09:29
Well, rest well, and hope all of this doesn't seem like a bad dream in the morning. :)
ventica1 so, i guess what I'm looking for is some kind of enumeration of the kinds of things that go out through asyncsock i/o 09:32
in or out
is there a libuv func or funcs that all asyncsock i/o reqs go thru? 09:33
or maybe that's the wrong q
anyway, afk for zzz... will attempt a build + test run in the next couple days 09:35
jnthn Well, an async socket write is an easy one to consider. An async write op is done on the handle; this in turn places the work into the async tasks queue. The thread running the libuv event loop (eventloop.c) receives this, issues the request with libuv. Later, libuv comes back with success or error. We package up a message and put it into a queue supplied with the initial request, where a Perl 6 worker thread handles it. 09:36
OK, sleep well :)
brrt ok, seems doable 09:37
(the OSR stuff)
basically, in MVM_spesh_osr_finalize, care has already been taken of inlining 09:38
a spesh cand already exists from which i can hang the jitcode 09:40
what i need to do is map the osr finalization 'deopt index' to a label in the JIT
and update the bytecode to the magic bytecode (that invokes the JIT)
and... i think i'll be golden, then
basically the same game as in invoking stuff in the first place :-) 09:41
jnthn I think you'll already have the jitcode even
'cus specialize triggers producing that iirc
Note that the OSR 'deopt index' in question is actually marked out specially 09:42
MVM_SPESH_ANN_DEOPT_OSR
brrt oh, you're quite right 09:43
osr doesn't kick in early, does it 10:04
jnthn "kick in early"? 10:10
brrt it takes 99 iterations before OSR starts logging :-) 10:11
jnthn Right
Oh, and I guess the number somewhere is 100, and we < it, or something :) 10:12
brrt i guess so :-) 10:13
what are the characteristics of OSR deopt points?
jnthn Well, for one they are never used as part of deopt :) 10:16
They use the same "remember the offset" logic, though. 10:17
brrt ok
jnthn So they have an entry in the familiar deopts table
It's just that we do the lookup "backwards" to the usual use of the table.
brrt so basically, any instruction may be an osr deopt point?
i see 10:18
jnthn osrpoint instructions exist in the unoptimized code.
These are turned in the logging phase into sp_osrfinalize or so
And then totally removed (and thus the annotation moved) in the final phase.
brrt what about deopt inline? are they ever used as 'real' deopt points 10:19
... oh
that's an annoyance
jnthn What is? 10:20
The annotation is still there on the target place to jump into the bytecode
brrt well, if they're removed, then how will i know (at jit level) where i should push the label 10:21
jnthn It's just that it's on the instruction in the optimized code to go to
brrt ok, so the annotation is moved?
oooooh
jnthn Right
brrt no problem for me then
jnthn "and thus the annotation moved" :)
brrt i see
jnthn But yeah, deleting instructions shuffles annotations.
MVM_spesh_manipulate_delete_ins does it
brrt should look for his glasses, embarrassed
ok, so ehm... 10:23
the annotation is moved to the next instruction, so the label should be inserted before the instruction 10:24
i can do that
jnthn *nod*
brrt hmmwait 10:33
lunch& 10:40
11:35 brrt joined
brrt can there be multiple osr finalization points? 11:49
jnthn Potentially; nested loops could cause that. 11:51
11:52 carlin joined
jnthn Yeah, seems log.c just translates them all 11:53
brrt hmm ok 12:02
then my current approach is simplistic
no matter 12:03
that can be fixed
dalek arVM/moar-jit: 2f37da4 | (Bart Wiegmans)++ | src/ (10 files):
Initial support for OSR

This supports OSR for those frames that only have a single OSR deopt point. That isn't true of all frames, so it needs some fixing to support multiple OSR deopt points.
12:09
12:11 carlin joined
jnthn wonders if that helps with a simple NQP benchmark like while_empty 12:11
brrt probably, yes 12:15
it will probably also break on some benchmarks
errands&
12:21 jnap joined 12:42 brrt joined 13:09 brrt joined
brrt errands made me lose my concentration 13:19
jnthn aww
brrt allow me to rubberduck :-) 13:20
if i need to support multiple osr points, i need to find the right label to jump into 13:21
i find the right label on basis of the current position of the interpreter
jnthn Could do it with a jump table mechanism, and pass in an index? 13:22
brrt well, yes, that's pretty much what i'm doing, except that i pass the pointer to the jit code segment directly so as to avoid annoying lookups and being bound to the table
jnthn ah, OK 13:23
That works too
brrt e.g. invokish ops aren't normally associated with a label, but i can use a 'local label' and load this directly into the jit reentry address 13:25
and for the one-osr-point case, i used a global label, which is stored in the global_labels array that dasm uses and that i had to set up 13:26
but for multiple osr points, i need to use dynamic labels, which are allocated in a different array 13:27
so what i think i'd need to do is associate the current offset with the entry label
and make sure i output the correct /dynamic/ label at the correct osr point 13:28
which means that this association should already exist at emitting-time
13:30 FROGGS[mobile] joined
jnthn I guess however you do it, you'll need to have some analysis on this ahead of emitting... 13:30
brrt indeed 13:33
13:59 Ven joined
brrt is there a way to know ahead of time how many osr points there are? 14:06
jnthn Count them? 14:07
brrt ugh... yes i guess so
jnthn But aside from that, not really...
brrt ok, that's just how it will be then
jnthn Are you not already making a pass through the graph for some other things, anyway? Maybe not... :)
brrt no, i'm not doing that now 14:08
graph creation is just a single pass
linear too
(yes, very very very boring :-) but i kind-of like it that way
jnthn I like simple things :) 14:10
brrt it's enough for now 14:11
it may not be enough in the future, but who can tell? 14:12
14:27 btyler joined
brrt oh, i know a line of c that is just evil 14:41
brrt hopes you can find it in my next commit 14:43
jnthn Did I write it? :)
hah
:P
14:48 jnap joined
brrt no, i've just written it 14:49
and yes, you should mock it 14:54
jnthn wonders if he'll also find a bug in it :P 14:56
timotimo is annoyed the page-down button isn't bringing him to the spot where moar-jit works for everything
jnthn timotimo: I think it'll need using the other buttons quite a bit... :P 14:57
timotimo brrt: you probably already answered this before, but: is eliminating loads and stores between all instructions still doable for this summer? if not, does it have to wait a year for the next GSoC? :)
brrt ehm.... well.... the ambitious version of that, will have to wait a bit i'm afraid
not-so-ambitious versions could be done, but there's still plenty to do before i get to that 14:58
timotimo OK
brrt it is my ambition to patch dynasm still this summer
but moar has priority
timotimo well, it'll already be a big help if we can turn a simple "store This Here, load That Here" into a "use This instead of That in the following instruction" 14:59
brrt hmm
you can (on the moarvm level) work on eliminating sets, and that would certainly help 15:00
basically, it depends on how complex you want to get it
timotimo i've tried that once before and it blew up in an "interesting" fashion
jnthn None of it is interesting until the JIT is handling typical NQP and Perl 6 code :)
brrt such true
timotimo er ... of course
jnthn timotimo: heh, I have a patch that tries to kill sets too, and it also exploded things :S
brrt i think i handle quite a bit of typical nqp already, but that may be my arrogance 15:01
well, there's a bug for sure :-)
ah, that's my second most evil line that blows up 15:02
hmm.. i see 15:06
timotimo well, code that only does things like access object properties, deciding things with boolean logic, printing to stdout ... those certainly are jittable right now :) 15:07
(not actually serious; i didn't look at bail reports in a long while) 15:10
brrt hey hey :-) 15:18
we can invoke, we can handle invokish operators, loop, and now it seems we can OSR
brrt realises there is still a lot to do indeed :-( 15:20
TimToady there's been a lot to do for 14 years now, but we still keep doing it... :) 15:21
brrt tests passed, time to commit
timotimo i don't know what it is, but there seems to be something that keeps us going
oh yay, the hunt for the evil line of c is on!
TimToady wonders if the second-most-evil line is still evil... 15:22
dalek arVM/moar-jit: 9822cde | (Bart Wiegmans)++ | / (9 files):
Support multiple OSR entry points

Use the current bytecode offset to find the correct label to jump to; the labels are associated at compile time by use of the spesh annotation.
15:26
brrt second most evil line is no longer so evil
timotimo brrt: is the osr_offsets sorted? you should use a binary search there!!!k 15:27
the evil line seems to have a comment pointing out what it is 15:28
jnthn It's unlikely to have more than 2 or 3 entries :P
brrt osr_offsets ehm.. i don't know that, i process the spesh graph in linear order 15:29
ugh
15:29 woolfy left
timotimo well, time to process the spesh graph in logarithmic order instead! 15:29
i applaud your use of *= in the last few characters of a line
jnthn I'm guessing something gets in the way of not putting the code in a test-osr sub? 15:32
(e.g. the mainline contains an op we don't JIT yet?)
brrt the OSR needs a caller 15:35
i'm not sure if the mainline has that?
(so that's why i didn't put it in mainline, btw)
in other words, we seem to have OSR now \o/ 15:36
brrt dinner &
15:36 brrt left
jnthn Yeah, it does have a caller 15:37
Otherwise OSR'd not help the benchmarks, and it does... :)
16:08 lizmat joined 17:10 zakharyas joined 18:26 brrt joined 18:30 Ven joined
Ven recompiles to get masak's AST.Str 18:31
Missing or wrong version of dependency 'gen/moar/stage2/QRegex.nqp' 18:39
I did git pull && perl Configure.pl --gen-nqp --gen-moar --backends=moar && make
brrt hmmm 18:40
i suppose it still has the old ones installed?
brrt has no idea about how --gen-nqp and --gen-moar are implemented 18:41
Ven makes clean
jnthn Was gonna say, was make install done also... 18:42
vendethiel oh yeah, forgot to put it in the list. it was
18:42 carlin joined
Ven works, nice :-) 18:43
brrt is getting a bit more excited about perl6 every day 18:52
Ven is getting a bit more excited about the JIT every day :-) 18:55
19:02 ventica_desktop joined
ventica_desktop should I clone MoarVM repo or rakudo? 19:03
jnthn If you clone Rakudo one you can --gen-moar to Configure.pl and it'll go grab the dependent repos 19:04
ventica_desktop ok 19:05
jnthn brrt: Did you get a chance to look into how we bail on the mainline of an NQP program, btw? :) 19:21
brrt i'm not sure that we do
but i'll see about it
frankly, it never seems to reach the JIT graph creation at all 19:22
jnthn Hmm 19:23
tbh I didn't even check it for NQP, just for Rakudo
But they both compile to nqp::while
brrt nqp::while? 19:24
jnthn QAST::Op.new( :op('while'), ... ) 19:25
19:25 lizmat joined
jnthn Is how you'd normally see it 19:25
But
m: nqp::while(1, say 'omg')
camelia rakudo-moar b20535: OUTPUT«(timeout)omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤omg␤o…»
jnthn Actually works
brrt awesome
but there's no op called while 19:26
jnthn Right, this is nqp:: op level
What I was getting at (badly :)) is that it's the code-gen from nqp::while that inserts osrpoint 19:27
So it should be working just the same between NQP and Rakudo.
brrt i see
but ehm, in my osr.nqp, MVM_spesh_osr_finalize is never called 19:28
jnthn Oh? 19:29
brrt that is, without the containing osr-test sub
it is called when i have put it in there
funny, because MVM_spesh_osr is called, though
(fprintf debugging ftw) 19:30
jnthn Weird
brrt yes
wait 19:34
has caller, does not have interned callsite 19:36
jnthn Ah
brrt might we be able to change that to has_caller /or/ has interned callsite
19:52 Ven joined
jnthn I'll have a little Perl 6 / MoarVM time in a little bit; will look at the bug queue unless somebody has something more pressing for me :) 20:41
20:43 lizmat joined, lizmat_ joined
Ven mmh, we can't very well give a compile-time error for a missing mandatory named, can we ? 20:45
jnthn On a sub perhaps yes 20:46
On a method, no
Ven and what about too many positionals, maybe ? 20:47
(yes, all on a sub)
jnthn m: sub foo($a) { }; foo(1, 2) 20:48
camelia rakudo-moar b20535: OUTPUT«===SORRY!=== Error while compiling /tmp/oikepEaKON␤Calling 'foo' will never work with argument types (Int, Int)␤ Expected: :(Any $a)␤at /tmp/oikepEaKON:1␤------> sub foo($a) { }; ⏏foo(1, 2)␤»
jnthn Already do that :)
Ven m: sub c(:$c = 5) { $c }; sub { c(3) } # that I mean
camelia ( no output )
jnthn The analyzer bails at present if it sees there's named params 20:49
Ven oh, alright =)
jnthn It's actually trying to prove properties that will let it compile-time resolve multi dispatches, or do inlinings.
And the errors are given when it proves the call can never work. 08:50
21:03 brrt left
jnthn I'm suspecting that github.com/MoarVM/MoarVM/issues/114 also relates to the #116 fix I did yesterday 21:25
jnthn leaves a note to say so 21:27
On #111 too 21:28
Less sure about that one, but it seems very viable 21:29
lizmat jnthn: if you needed to describe spesh in 1-2 lines, what would it be? 21:41
FROGGS__ jnthn: I'll retest the tickets I've created tomorrow 21:42
timotimo lizmat: since most code that is flexible/polymorphic at the time of writing ends up being static/monomorphic at run-time, it's helpful to optimistically re-write your bytecode to assume staticness and bail out if it turns out not to be the case. That's what spesh does. ← my attempt 21:46
lizmat timotimo++ :-) 21:47
jnthn That's not bad :)
timotimo i've learnt from the best :3
jnthn Spesh takes code that is highly dynamic, with a lot of late binding and polymorphism, and - based on the actual types that show up at runtime - generates specialized versions of the code that do away with the costly late-binding. 21:49
lizmat jnthn++
jnthn (Which works well because, as timotimo++ said, most potentially polymorphic code is monomorphic)
lizmat I will use that 21:50
jnthn The "bail out" bit is kinda worth mentioning too perhaps.
The fact that Moar can deoptimize is as important as its ability to optimize, in many senses. :)
timotimo we've seen a few times how not being able to properly deoptimize can affect the behavior of a program 21:52
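A purely illustrative (hypothetical) example of the kind of code spesh helps with: the '+' below is a late-bound multi dispatch, but at run time it almost always sees (Int, Int), so a specialized version can drop the dispatch, with deoptimization as the escape hatch if another type ever shows up.

    sub total(@values) {
        my $sum = 0;
        $sum = $sum + $_ for @values;   # polymorphic in the source,
        $sum;                           # monomorphic in practice
    }
    say total(1..1000);    # 500500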
jnthn brrt: So far, the Windows JIT failures are gone with JIT disabled, so it's certainly a JIT-related issue. OSR and inline being disabled don't affect things.
brrt: So seems quite confined to the JIT.
It *looks* like we end up storing bogus stuff in an attribute slot. 21:53
The segv happens if we GC mark it
But other times we get this:
P6opaque: invalid native binding to object attribute
Well, too tired to nail this tonight...sleep & 22:51
FROGGS__ gnight 22:52
23:12 lizmat joined
lizmat fwiw, I just found out that you cannot install MoarVM on a Mac (and possibly other machines) 23:21
when you have a space in the names of any of the directories in which you're compiling
Piers traced this back to a problem in the makefile of dyncall 23:22
where there is apparently a call to "dirname" without quoting the parameter
what would be the best way to fix this?
afk&
TimToady destroy all macs?
lizmat could be that it is the same on other OSes 23:23
TimToady destroy all mac users? 23:24
oh wait, windows users also use spaces... 23:25
lizmat spacey creatures :-)
TimToady space, the final frontier... 23:28
lizmat :-) 23:33
afk&