01:44 MasterDuke joined
timotimo oh holy wow 01:46
===> Install [FAIL] for zef:ver<0.1.32>:auth<github:ugexe>: MoarVM panic: threads list corrupted 01:47
it's the 2017.07 rakudo, though, so don't panic
ugexe that's during precomp when installing 01:48
well i'm assuming it made it to precomp. it's inside CURI.install(...) technically 01:49
timotimo yeah, but holy wow, that's a memory corruption or something. i've never seen this message come up before 01:51
timotimo goes to bed 01:52
MasterDuke timotimo: what would be a good benchmark for my fsa-for-string-storage branch?
it now builds nqp and rakudo
doh, questions for tomorrow it seems 01:54
samcv MasterDuke: i wish i could answer that question 01:55
but i would compare times for spectest since i have no better answer
MasterDuke hm...maybe something like `my @chars = "large_file".IO.comb; my @other-chars = @chars.map({ $_ ~ rand.Str })` ? 01:59
that should create and manipulate a bunch of small strings, which i think would benefit the most from using the FSA 02:00
samcv i'd still like to know about how it affects things generally
MasterDuke yeah, but that's a much longer benchmark to run! 02:03
huh. that code takes the same amount of time on the branch as on master, but uses 1.23g ram instead of 1.21g 02:10
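(A toy sketch of the size-class "fixed size allocator" idea behind the branch, offered only as a possible reading of the RAM numbers above: requests round up to a bin size and freed blocks are parked on per-bin free lists rather than going back to the OS, so resident memory can come out slightly higher than with plain malloc even when speed is unchanged. This is not MoarVM's fixedsizealloc.c; every name below is invented.)

/* Toy size-class allocator: NOT MoarVM's FSA, names are hypothetical. */
#include <stdlib.h>
#include <stdio.h>

#define BIN_COUNT 8
#define BIN_STEP  8   /* bins hold 8, 16, 24, ... 64 byte objects */

typedef struct FreeNode { struct FreeNode *next; } FreeNode;
typedef struct { FreeNode *free_list[BIN_COUNT]; } ToyFSA;

static size_t bin_for(size_t bytes) {
    return (bytes + BIN_STEP - 1) / BIN_STEP - 1;   /* 1..8 -> 0, 9..16 -> 1, ... */
}

static void *toy_fsa_alloc(ToyFSA *fsa, size_t bytes) {
    if (bytes == 0 || bytes > BIN_COUNT * BIN_STEP)
        return malloc(bytes);                        /* too big: plain malloc */
    size_t bin = bin_for(bytes);
    if (fsa->free_list[bin]) {                       /* reuse a previously freed node */
        FreeNode *node = fsa->free_list[bin];
        fsa->free_list[bin] = node->next;
        return node;
    }
    return malloc((bin + 1) * BIN_STEP);             /* rounded up to the bin size */
}

static void toy_fsa_free(ToyFSA *fsa, size_t bytes, void *ptr) {
    if (bytes == 0 || bytes > BIN_COUNT * BIN_STEP) {
        free(ptr);
        return;
    }
    FreeNode *node = ptr;                            /* keep the block for later reuse */
    node->next = fsa->free_list[bin_for(bytes)];
    fsa->free_list[bin_for(bytes)] = node;
}

int main(void) {
    ToyFSA fsa = { { NULL } };
    char *a = toy_fsa_alloc(&fsa, 10);               /* lands in the 16-byte bin */
    toy_fsa_free(&fsa, 10, a);
    char *b = toy_fsa_alloc(&fsa, 12);               /* same bin: reuses a's memory */
    printf("reused: %s\n", a == b ? "yes" : "no");
    toy_fsa_free(&fsa, 12, b);
    return 0;
}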
02:56 ilbot3 joined
Geth MoarVM: 44e3cbe2a4 | (Samantha McVey)++ | docs/ChangeLog
Better clarify and organize the ChangeLog
05:02
samcv nine: you around? two changelog entries of yours are the last ones left, and i'm not sure what the end result was since the messages conflict with each other
nine: here github.com/MoarVM/MoarVM/blob/44e3...og#L41-L42
or if somebody else can answer that for me you can go ahead and change it if you have commit rights 05:03
or just let me know
06:25 evalable6 joined 06:26 bisectable6 joined, unicodable6 joined, releasable6 joined, Geth__ joined 07:14 domidumont joined 07:20 brrt joined
brrt good * #moarvm 07:20
07:20 domidumont joined
Geth MoarVM: 13b45564ef | (Stefan Seifert)++ | docs/ChangeLog
Unconfuse ChangeLog

The first commit was a quick fix to exclude an unhandled case while the second commit brought code to actually handle that case correctly.
07:31
nine samcv: fixed 07:32
samcv awesome thanks :)
brrt hey #moarvm 07:38
just as a random idea
maybe we can rename the JIT to the 'machine code backend' or 'mcb'
that would give a much better indication of what it actually does 07:39
or at least, it would communicate it better 07:41
samcv hm
backend sounds kind of vague though
maybe something fancier sounding 07:42
08:05 domidumont joined 08:19 brrt joined
nine always thought "JIT" fits very well 08:19
brrt but, but, but 08:20
it causes confusion
or at least i feel like it does
nine In which way?
nwc10 well, it's "native code generator" more than anything else that's been suggested 08:21
(I think)
nine "In computing, just-in-time (JIT) compilation, also known as dynamic translation, is a way of executing computer code that involves compilation during execution of a program ā€“ at run time ā€“ rather than prior to execution.[1] Most often this consists of source code or more commonly bytecode translation to machine code, which is then executed directly." 08:23
en.wikipedia.org/wiki/Just-in-time_compilation 08:24
samcv ok i'm going to release now 08:51
Geth MoarVM: 8eaba4ac8d | (Samantha McVey)++ | docs/ChangeLog
Make a few minor changes to ChangeLog before release
MoarVM: 8c6f97ea8e | (Samantha McVey)++ | docs/release_guide.md
Add step to release guide to check tests with NFG_DEBUG set
MoarVM: 23afb9308e | (Samantha McVey)++ | VERSION
Bump version to 2017.11
samcv release done 08:52
09:03 zakharyas joined 09:33 zakharyas joined 09:40 domidumont1 joined
brrt samcv++ 09:50
nine: in the way that the MoarVM JIT does much less than what, say, a JIT for v8 does 09:51
10:08 zakharyas1 joined
jnthn What most others call their JIT consists of our spesh and JIT subsystems together 10:09
I'm not really sure how much we should worry about this though 10:11
I think most of the world knows JIT in a fairly hand-wavey way, such as "thingy that produces machine code so stuff runs faster" 10:12
nwc10 only thing is that we have a platform-independent JIT in "spesh" 10:13
10:13 geospeck joined
nwc10 and that might not be clear 10:13
jnthn Right
brrt so renaming it to a native code generator would make stuff more clear
nwc10 so, platform-independent JIT and CPU-specific JIT :-) 10:14
jnthn It would, though for marketing purposes we'd still want to say "we have a JIT" :)
nwc10 or yes "native code generator" and "platform-independent JIT"
but yes, totally agree, the magic CV keyword needs to be "JIT" else HR reject us
brrt hehe
jnthn Aww, my twitter feed has a bunch of "look, we have snow!" posts from around Europe. I ain't got any. 10:21
moritz the forecast says +10°C for today (southern Germany) 10:24
far from snow
10:26 robertle joined
nwc10 we have rain 10:29
jnthn We've had and will likely have more rain 10:32
10:56 zakharyas joined 12:00 domidumont joined 12:51 domidumont joined 13:07 geospeck joined
dogbert17 had to wade through snow 13:56
jnthn :) 13:57
dogbert17 jnthn: interested in taking a look at a gist? Is it something I should report against MoarVM? gist.github.com/dogbert17/cebea492...1a183813fc
dogbert17 now has 32- and 64-bit VMs at his disposal 13:58
jnthn Hm, interesting 13:59
Can file it
For now I want to try and figure out the last lastexpayload inline blocker 14:00
timotimo MasterDuke: i find it surprising that fsa-for-string-storage uses more memory than master
dogbert17 cool, will create an issue
Geth MoarVM/inline-lastexpayload: b9489ea879 | (Jonathan Worthington)++ | 2 files
Remove :noinline on lastexpayload
14:01
jnthn That's a rebase on master (since brrt++'s fix was cherry-pick'd) 14:02
Well, spesh nodelay + blocking built/tested fine 14:09
Now onto spectest
ah, yes, the same issue 14:13
(try "" ~~ /. ** {NaN}/) for ^1000; somehow stops disarming Failures after a while 14:14
The slightly good news is that it does it even without NODELAY 14:25
Which means it breaks as soon as the 65th inline, which is a whole load more tractable than the 6000th :)
Can inline Int (3309) into DYNQUANT_LIMITS (9664) 14:26
Is the one that busts things
dogbert17 github.com/MoarVM/MoarVM/issues/754 14:29
what happens if the JIT is disabled? 14:32
jnthn Same 14:36
Only disabling inlines hides it
dogbert17 so brrt is off the hook :) 14:40
14:55 markmont joined 15:05 zakharyas joined
timotimo hm. it's not losing a guard that it'd need? 15:19
jnthn No, it's about nqp::throwlexpayloadcaller 15:22
The inlined_and_not_lexical thingy is...very dubious 15:23
I think it was meant to be setting them onto the handlers duplicated from the inliner 15:24
Indeed, commenting out the lines that set that flag fixes the issue 15:25
Question is what it breaks
jnthn does a stress build/test to find out 15:26
Turns out, breaks the Rakudo build right at the end 15:32
timotimo in the core dist installation things
?
jnthn No, in create-moar-runner
timotimo wow, that's an extremely simple script, isn't it? 15:33
jnthn Yup, so that mechanism is obviously doing something important 15:34
timotimo i see the commit that added it was yours 15:36
hehe.
it says it doesn't hold up any more once we can inline closure-cloned things
jnthn Commit ID?
timotimo github.com/MoarVM/MoarVM/commit/1185cabb 15:37
cabbage?
jnthn Also the comment on the flag...is stranger still. It matches what I'd expect a flag of that name to mean
But it wasn't being set in those cases
It was being set on an originally included one instead
timotimo so it's out of the perspective of the inlinee? 15:40
jnthn Well, the problem is that we need to make sure that when we inline, we don't accidentally find lexical handlers that we should not. 15:41
timotimo if that were the case, then i could understand it ... like some things would be inlined and lexically in scope, some things would be inlined from outside the lexical scope
and in multi-inlines, it can be even weirder
jnthn Right 15:42
timotimo higher-depth-inlines or whatever we call those
jnthn But what's odd is that this fix was...seemingly wrong
So now I have the original wrong fix that fixed something and broke the case I've been debugging today, and correcting it fixes my case today but breaks something else 15:45
I wonder if... 15:46
The multi return has an inline handler
It does throwlexpayloadcaller - that is, throw it lexically, but only after walking down one frame to the caller 15:47
When we inline `return`, we rewrite that into a throwlexpayload
timotimo you mean like "multi sub return"?
jnthn Yes
timotimo ah, but we forget to do the "ignore one frame" thing
maybe
but rewriting *caller to * should do that 15:48
now i get what you're saying
jnthn Well, my incorrect fix before disabled all lexical handlers of an inlinee
timotimo if multi sub return has kept its own return handler it could then catch the one without "caller"
when previously that one would have been ignored
jnthn Right, and I think my original wrong fix made that not happen
And we relied on that broken fix. 15:49
But it does a lot too much
And too little at the same time
timotimo so we'll have a sp_throwpayloadlexignoreonereturnhandler?
jnthn Not sure yet 15:50
There are quite a few cases to consider
The one that the current thing breaks is where we inline something that has a lexical return handler
That thing calls .fail
15:50 robertle joined
jnthn That fail does throwlexpayloadcaller 15:50
But because we have accidentally blocked out all of the lexical handlers in inlinees, it doesn't find it 15:51
It instead finds the wrong one (that *would* have been marked with the flag in question if my original fix had been right) 15:52
So obviously correct that works, but then every case of an inline of `return` is now broken
*correcting 15:53
This has implications for moving inlines too
Because having them at the end means we can duplicate the handlers and block out the non-lexical ones 15:54
timotimo moving, as in put them in between rather than at the end?
jnthn Right
timotimo mhm
jnthn So an ideal fix for all of this will ideally be robust enough to handle that case also 15:55
Damn, this is tricky. 15:57
timotimo ... something with bit masks ... 15:58
nine Sounds to me like the simple flat handlers table may be too simple for the complex requirements. 16:06
16:07 geospeck joined 16:08 brrt joined
jnthn nine: When working on moving inlines, did you change it so that the handlers of inlinees were prepended rather than appended to the resultant handler table, ooc? 16:11
16:11 zakharyas joined
jnthn (So those innermost ones of the handler would be found first) 16:11
nine jnthn: no. But my knowledge about exceptions in MoarVM is rather basic. I'd not be surprised if that would be the actual source of the remaining issues 16:13
jnthn It's just the two commits at the head of inline_in_place? 16:14
If so, I'm not seeing anything that'd account for the changes
nine yes
jnthn Yeah, that may well be it
Because the exception handler table is searched linearly
With the expectation that innermost handlers come earliest
nine But why would the location of the inlined code make a difference there?
jnthn Consider: 16:15
nine Unless the table was "wrong" even before my changes and it was just luck that the code worked anyway :)
jnthn Handler Starts
some-call-here()
Handler Ends
The current "place the inlined code at the end" model results in the inlinee being placed after the HandlerEnds 16:16
Any handlers inside of the inlined call are copied into the table
Then we duplicate any handlers in effect at the point of the call
The original handlers don't count, because the code region they cover is outside of the inlined code 16:17
Since it's appended at the end
If we do the inline "in place" then the inlined code falls without the original handler's locations
So they apply
And if they're first in the table, they'll be found before any of the handlers in the code that was inlined
Meaning it could, for example, jump out of the wrong loop on a `last` 16:18
Or skip over a CATCH handler
And hit an outer one instead
uh, falls *within* the original handler's locations, I meant above 16:19
The handlers table is just a set of start/end addresses.
nine Ah, I see. So it's actually a wonder that so much works despite my ignorance of this :)
jnthn :) 16:20
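(A self-contained toy model of the flat handler table being described above, purely to illustrate the shadowing argument; offsets, types and names are invented, not MoarVM's real structures. A linear scan returns the first entry whose start/end range covers the offset, so inlined code placed inside an outer handler's range is caught by that outer handler unless the inlinee's entries come earlier in the table.)

/* Toy flat handler table: NOT MoarVM's actual data structure. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t start;       /* first covered bytecode offset          */
    uint32_t end;         /* one past the last covered offset       */
    const char *label;    /* stand-in for category/goto information */
} ToyHandler;

/* Return the first table entry covering `offset`, or NULL. */
static const ToyHandler *find_handler(const ToyHandler *table, size_t n, uint32_t offset) {
    for (size_t i = 0; i < n; i++)
        if (offset >= table[i].start && offset < table[i].end)
            return &table[i];
    return NULL;
}

int main(void) {
    /* "Place inlined code at the end": the inliner's handler covers [10, 50),
     * the inlinee's appended handler covers the inlined region [60, 80),
     * which lies outside the original handler's range. */
    const ToyHandler appended[] = {
        { 10, 50, "inliner handler" },
        { 60, 80, "inlinee handler" },
    };
    printf("at offset 65: %s\n", find_handler(appended, 2, 65)->label);

    /* If the inlined code instead sat in the middle, at [20, 40), the outer
     * entry would cover it too; being earlier in the table it wins the
     * linear scan and shadows the inlinee's own handler. */
    const ToyHandler in_place[] = {
        { 10, 50, "inliner handler" },
        { 20, 40, "inlinee handler" },
    };
    printf("at offset 25: %s\n", find_handler(in_place, 2, 25)->label);
    return 0;
}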
16:20 squashable6 joined
jnthn Trouble is that I think we refer to handlers by their index in a bunch of annotations 16:21
nine But it also sounds to me like inline_in_place should make finding a correct solution easier. We shouldn't actually need to duplicate handlers as the original handlers cover the inlined code quite well.
jnthn So yes, we lose the "need to duplicate" problem
Except that was an immutable solution :) 16:22
And now we need to prepend...meaning we have a load of handler indexes all over the spesh graph in annotations that will have values that become out of date upon the prepend
Of course, everything can be solved with another LoI :) 16:23
But I suspect this is much of why I did the "put them at the end" in the first place
nine Are you sure the flat table is still a good data structure for managing handlers?
timotimo handlers aren't always nested cleanly 16:24
jnthn Well, it ticks a bunch of boxes 16:25
The performance of not only every exception throw, but also every otherwise unoptimized return, next, last, take, emit, etc. depends on doing swift lookups 16:26
A set of integer comparisons in a table that's relatively compact in memory is good in that sense.
16:26 brrt joined
jnthn (Also, the hardware likes linear reads through things, as it can predict them well with caches) 16:27
There may be alternatives with good properties too
timotimo we have all starts and ends in one contiguous array and extra info separately already? 16:28
nine Maybe multiple tables? Like switching out the handlers table on entering inlined code and switching back afterwards?
jnthn Well, the other goal is "no runtime cost unless an exception is thrown"
nine And the inlinee's handlers table contains its lexical handlers and all the dynamic ones
If you can make it behave correctly :) 16:29
jnthn timotimo: We don't, but the other fields in there aren't that huge either. Granted we could maybe get a small win by compacting them 16:30
nine Switching tables could be 2 very cheap ops and the tables to search would be smaller overall.
jnthn Well, an unconditional branch is arguably also a cheap op. :)
Which was kinda my reasoning for why sticking them at the end wasn't such a bad thing. 16:31
I'd be a bit sad to lose those and then gain two new things that's just as much work
nine So why do we try to move them back?
We'd also gain more correctness. 16:32
jnthn Largely because the inlinee might then compact into a basic block with the inliner
Which is good for the expr JIT
timotimo the thing with "entering and leaving an inline causes tables to switch" can also be done with a flat table and bitmasks
jnthn Instead of switching tables, we could just have a set of regions that pick a table 16:33
So the first scan is for an applicable table 16:34
timotimo that, too
jnthn We could potentially then mark up each of those with outer/caller indices
nine We'd also only need to switch table if the inlinee has any handlers. Is that common?
jnthn So we could traverse the tables differently in dynamic vs outer contexts 16:35
nine: Yes; because `return`
Also loop flow control
I think a set of structured tables, the first table picked by region, and chained for both outer and dynamic, may well do it 16:38
It'd save a bunch of lookups 16:39
In that we can skip irrelevant tables
Essentially, it's kinda what nine is suggesting but without the need for ops; we just find the table in the same way we'd find the current inline at a deopt point 16:40
And I guess all this is really doing is preserving the callee/caller/outer structure for the sake of handler resolution post-inline 16:46
Rather than mashing it all together
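(A sketch of the shape of the "pick a table by region, then chain to the caller's table" idea floated above. Everything here is hypothetical; it shows only the lookup structure, not any actual MoarVM types or the full outer/dynamic distinction.)

/* Toy region-chained handler lookup: all names invented. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t start, end;             /* covered bytecode offsets */
    const char *label;
} ToyHandler;

typedef struct ToyTable {
    const ToyHandler *handlers;
    size_t count;
    const struct ToyTable *caller;   /* table to continue with after this one */
} ToyTable;

typedef struct {
    uint32_t start, end;             /* bytecode region, e.g. one inlinee's code */
    const ToyTable *table;
} ToyRegion;

/* Pick the table for the region containing `offset`, then search it and,
 * failing that, the chained caller tables. */
static const ToyHandler *find_handler(const ToyRegion *regions, size_t n, uint32_t offset) {
    for (size_t i = 0; i < n; i++) {
        if (offset < regions[i].start || offset >= regions[i].end)
            continue;
        for (const ToyTable *t = regions[i].table; t; t = t->caller)
            for (size_t j = 0; j < t->count; j++)
                if (offset >= t->handlers[j].start && offset < t->handlers[j].end)
                    return &t->handlers[j];
    }
    return NULL;
}

int main(void) {
    static const ToyHandler outer_handlers[]  = { { 0, 100, "inliner handler" } };
    static const ToyHandler inline_handlers[] = { { 20, 40, "inlinee handler" } };
    static const ToyTable outer_table  = { outer_handlers, 1, NULL };
    static const ToyTable inline_table = { inline_handlers, 1, &outer_table };
    /* The inlinee's code sits at [20, 40), inside the inliner's [0, 100). */
    static const ToyRegion regions[] = {
        { 20, 40, &inline_table },   /* innermost region listed first */
        { 0, 100, &outer_table },
    };
    printf("at 25: %s\n", find_handler(regions, 2, 25)->label);  /* inlinee handler */
    printf("at 60: %s\n", find_handler(regions, 2, 60)->label);  /* inliner handler */
    return 0;
}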
Geth MoarVM: b9826e1764 | (Aleks-Daniel Jakimenko-Aleksejev)++ | docs/ChangeLog
Point to moarvm wiki for the next changelog

The contents of the wiki page will be moved into the changelog file before the release.
Sister commit: github.com/rakudo/rakudo/commit/47...60226e52d0
16:47
jnthn wrote some notes on that 17:04
17:04 zakharyas joined
jnthn Occurs to me that the nested tables can actually hang off MVMSpeshInline 17:05
17:11 Zoffix joined
jnthn OK, tired...but glad I've got to the bottom of what's going on and a seemingly good path forward 17:18
And maybe it'll help with nine++'s inline moving work too 17:19
17:23 zakharyas joined 17:29 domidumont joined
nine That would be nice :) 17:44
17:44 Zoffix left
nine In any case, I've at least learned a lot from this discussion 17:44
Geth MoarVM: 26ad7cc55d | (Aleks-Daniel Jakimenko-Aleksejev)++ | docs/release_guide.md
Be more specific about tar uploads (release guide)
18:26
19:47 evalable6 joined 20:11 zakharyas joined
dogbert17 .seen timotimo 20:29
yoleaux I saw timotimo 17:44Z in #perl6: <timotimo> javascript has Object.assign which allows you to merge multiple objects i believe
dogbert17 timotimo, are there any known issues with the --valgrind option for MoarVM builds? 20:38
on my 64-bit VM perl6-valgrind-m crashes with SEGV and, among other things, the message 'Memcheck: mc_main.c:5525 (vgMemCheck_is_valid_aligned_word): Assertion 'VG_IS_WORD_ALIGNED(a)' failed.' 20:40
interestingly it always points to the same place in the code, i.e. github.com/MoarVM/MoarVM/blob/mast...NFA.c#L239 20:43
20:51 brrt joined 20:56 geospeck joined
MasterDuke timotimo: this is what my branch looks like now: github.com/MoarVM/MoarVM/compare/m...ng_storage 21:10
timotimo, jnthn, nine, samcv, et al.: notice anything obviously wrong ^^^ ? 21:11
21:44 xi- joined 22:03 xi- joined 22:31 brrt joined 22:53 markmont joined 23:06 xi- joined