timotimo | i believe so | 00:00 | |
MasterDuke | even if the string has a bunch of strands with weird combinations of combiners and such? | ||
timotimo | yeah | 00:01 | |
when we have strands we always* make sure that combiners work properly | |||
jnthn | Yes, we make sure that is always correct | 00:02 | |
timotimo | if necessary we add an extra strand that combines combiners from one strand with a character from another and move the beginning and end points of the other strands so we don't have duplicates | ||
jnthn | 'night o/ | 00:03 | |
timotimo | night jnthn | ||
MasterDuke | hm, i have questions about possible optimizations to collapse_strands, but need to go try to put a baby to sleep | ||
hopefully back soon... | 00:04 | ||
timotimo | now i kind of want to go through the whole heap of a random moarvm program and see if any big strings are only being kept alive by a tiny substr being placed on it ... or something | ||
actually, the heap analyzer could do this | 00:05 | ||
no, not quite | |||
a query like "select * from collectables where incidents == 1 and repr == 'VMString'" | 00:06 |
string 0x46ee340 usage: 38.161694% - 1285 wasted, 2078 graphs | 00:24 | ||
string 0x7f31da977438 usage: 40.658800% - 1207 wasted, 2034 graphs | |||
string 0x7f31dd3f01a0 usage: 71.388368% - 305 wasted, 1066 graphs | |||
00:31 MasterDuke joined
MasterDuke | timotimo: that's cool, how are you generating those numbers? | 00:31 | |
timotimo | gist.github.com/timo/2fd64d7aab2f0...5ddf6007ec | 00:32 | |
MasterDuke | think a heuristic in substr to create flat strings instead of strands if the ratio between the substring and the full string is too great makes sense? | 00:38 | |
timotimo | doesn't sound bad, no | 00:45 | |
what this patch+script doesn't handle is when there used to be many things referencing a string but after gc there's only one thing referencing a little part of it | 00:46 | ||
so it should have a marker for when gc started or ended, and split by that, and such | |||
or maybe only report on what comes after the last marker | 00:51 | ||
MasterDuke | hm, there is already a case in MVM_string_substring that creates a flat buffer, could just add another conditional to the two cases before it so it wins when the ratio is too big | 00:52 | |
`if (a->body.storage_type != MVM_STRING_STRAND) { <create strand> } else if (a->body.num_strands == 1 && a->body.storage.strands[0].repetitions == 0) { <create strand> } else { <create blob string> }` | 00:53 | ||
^^^ what exists now | |||
timotimo | wait, == 0? | 00:54 | |
MasterDuke | /* Single strand string; quite possibly already a substring. We'll just produce an updated view. */ | 00:55 | |
timotimo | i thought single repetition would have 1 rather than 0 | 00:56 | |
i suppose 0 is better because calloc initializes to that | |||
MasterDuke | no, that's why i returned repetition + 1 in my recent PR | 00:57 | |
timotimo | ok, but the if where repetitions == 0 is dead code, then? | ||
01:24 MasterDuke joined
MasterDuke | ugh, a recent kernel has done bad things to my wifi driver, my connection keeps dropping out | 01:25 | |
anyway, no, repetitions gets set to 0 in several places | |||
timotimo | oh? maybe i'm hit by the same thing. i've been experiencing silent connection deaths now for like a month | 01:32 | |
it'd still show that i'm connected to the wifi, but no data comes through | |||
MasterDuke | i used to get something like that occasionally, but this is much more obvious | 01:33 | |
01:33 MasterDuke_ joined
MasterDuke_ | ugh, there it went again | 01:33 |
my dmesg shows lots of: wlp2s0: failed to remove key (2, ff:ff:ff:ff:ff:ff) from hardware (-22) | 01:34 | ||
timotimo | i just booted up my laptop a few minutes ago, so it didn't drop yet | ||
MasterDuke_ | apparently it's a regression in the most recent ubuntu 17.10 kernel, but i haven't gotten around to attempting any of the workarounds (and am just hoping they put out a new kernel soon) | 01:35 | |
so back to collapse_strands. it currently uses the grapheme iterator | 01:36 | ||
any particular reason it couldn't just copy each strand in whole? | 01:37 | ||
using something efficient like memset for cases of grapheme_8's with repetitions? | |||
timotimo | memcpy* | 01:38 | |
MasterDuke_ | ? | ||
memset only in the case of something like "a" x 100 | 01:39 | ||
timotimo | oh | 01:43 | |
right | |||
how many places do you have to edit to get a new storage format in, i wonder ... | 01:48 | ||
MasterDuke_ | `collapse_strands` just uses `iterate_gi_into_string`, and `iterate_gi_into_string` is just doing `result->body.storage.blob_8 = MVM_string_gi_get_grapheme(tc, gi)` | ||
samcv | MasterDuke_: can't copy it unless all strands are the same bit type | ||
might be something i should do | |||
MasterDuke_ | samcv: right, it already converts to grapheme_32 if it ever hits something that won't fit in _8 | 01:49 | |
samcv | yeah | 01:50 | |
MasterDuke_ | so why not copy strand by strand instead of grapheme by grapheme? | ||
i know we can have strands of different bit type, but how common is that actually? | 01:51 | ||
of course if it is very common it might make copying strands not a win | |||
samcv | i'm writing some code right now to copy the memory areas instead of by grapheme right now | 01:59 | |
MasterDuke_: it's fairly common | |||
but it's worth writing code to copy the memory instead | 02:00 | ||
MasterDuke_ | cool | 02:01 | |
02:56 ilbot3 joined
timotimo | (that's memory usage of core setting build) | 02:56 | |
MasterDuke_ | it went up? | 03:01 | |
timotimo | no, after patch is lower | ||
422171.2 "421204".."423128" | 03:02 | ||
474649.6 "474504".."474860" | |||
this is 'my @foo = "gen/moar/CORE.setting".IO.comb; say +@foo;' | 03:03 | ||
after patch vs before patch again | |||
MasterDuke_ | oh! nice | 03:04 | |
timotimo | there's test failures in nqp's test suite though | 03:06 | |
m: say "that's a saving of { (474649.6 - 422171.2) / 2119152 } bytes per character combed" | 03:07 | ||
camelia | that's a saving of 0.0247639 bytes per character combed | ||
timotimo | oh, kilobytes | ||
m: say "that's a saving of { (474649.6 - 422171.2) * 1024 / 2119152 } bytes per character combed" | |||
camelia | that's a saving of 25.3582006 bytes per character combed | ||
MasterDuke_ | what's the failure? | 03:10 | |
timotimo | matching \r and \R in regexes | ||
spectest will surely find out more | 03:12 | ||
MasterDuke_ | huh | 03:16 | |
think it's an easy fix? | 03:17 | ||
timotimo | probably | ||
but not before sleep | |||
MasterDuke_ | heh | ||
03:40 colomon joined
samcv | MasterDuke_: i think i've almost got it. will use memcpy on substrs that have only one strand as well | 04:12 |
MasterDuke_ | nice | 04:14 | |
done any benchmarking yet? | |||
samcv | took some time to get it working for 1 strand and all different types of strands. and getting that and the traditional way MVMROOTED right, but going to run spectest now | |
*going to spectest now. got nqp tests passing | |||
~4x faster | |||
when it is using the memcpy way | |||
MasterDuke_ | !! | ||
awesome | 04:15 | ||
samcv | doing a substr will return a 32-bit string even if the section it chose had only 8 bits. but the speed improvement is probably worth it i'd think | |
for now i have a clause in there that will throw if the result != orig so that should hopefully catch any issues. i thought it was working an hour ago but when i inserted that clause it revealed there *were* issues :P | 04:16 | ||
MasterDuke_ | isn't there some sort of convert_to_8bit_if_able function anyway? | ||
samcv | well the spectests didn't pass, but it was hard to debug cause i couldn't tell where it was introducing errors | ||
MasterDuke_: yeah, well it wouldn't be any faster in that case though | |||
compared to collapsing traditionally | |||
since the whole thing regarding this is not to have to access each element individually | 04:17 | ||
MasterDuke_ | ah, true | ||
well i'm off to sleep, but i'll look forward to backlogging | 04:19 | ||
samcv | :-) o/ night MasterDuke_ | ||
04:21 bloatable6 joined
samcv | woo \o/ clean spectest | 04:22 | |
Geth | MoarVM: 8744857ed7 | (Samantha McVey)++ | src/strings/ops.c collapse_strands with memcpy if all strands are same type 4x faster If all the strands to collapse are of the same type (ASCII, 8bit, or 32bit) then use memcpy to collapse the strands. If they are not all the same type then we use the traditional grapheme iterator based collapsing that we previously used to collapse strands. This is 4-4.5x faster as long as all the strands are of the same type. |
04:33 | |
samcv | \o/ pushed | ||
05:44 lizmat joined
05:58 AlexDaniel joined
06:24 domidumont joined
06:26 domidumont joined
06:31 domidumont joined
07:21 geospeck joined
07:55 BinGOs joined
08:07 brrt joined
08:15 brrt joined
brrt | oh, that's funny... | 08:26 | |
i reduced the size of the expr nodes to 32 bits, and the time spent in building CORE.setting goes up | 08:27 | ||
lizmat | but perhaps it now builds under 1G ? | 08:28 | |
brrt | hmm, it may have been not a real result | 08:29 | |
i'm doing 10 consecutive runs and there's quite some variation | 08:30 | ||
i'm wondering if setting MVM_SPESH_BLOCKING to a true value will make them more consistent | 08:35 | ||
also, good morning | |||
Geth | MoarVM/jit-expr-optimizer: 51aa6e5d62 | (Bart Wiegmans)++ | 2 files Add NOOP expr operator During the tiling process we sometimes add tiles for the labels in conditional expressions that have no operator associated. However, the operator in memory would then be 0, which is a valid operator type. That didn't matter until I changed the order of LOAD and COPY, since COPY ... (9 more lines) |
08:36 | |
MoarVM/jit-expr-optimizer: 366944125a | (Bart Wiegmans)++ | 11 files Add CONST_PTR indirection Previously, we'd put large constants directly in the expression nodes array, which means that each node had to be as large as a pointer (i.e. 64 bits wide). This is wasteful (we don't need that much space for most nodes) but it also ensures that we can't compare pointers across runs, since address layout randomization will make ... (8 more lines) |
08:47 zakharyas joined
08:54 domidumont joined
08:59 zakharyas1 joined
lizmat | .tell samcv the collapse_strands change makes a lot of test files fail, one example: gist.github.com/lizmat/c001084818a...10c13d78b3 | 09:01 |
yoleaux | lizmat: I'll pass your message to samcv. | ||
nine | lizmat: that's odd since samcv++ explicitly mentioned the change passing spectests | 09:05 | |
lizmat | well, then maybe it's MacOS specific :-( | ||
it's definitely not an error that I've ever seen before | 09:06 | ||
nwc10 | I'm seeing the same backtrace for that (CentOS (not my choice) and ASAN) | 09:09 | |
er, a line number differs - possibly I'm on a slightly different NQP version | |||
so strange. why do different people get different versions of the truth? | |||
09:18 robertle joined
lizmat | hmmm...weird: I still get the "Note the same matching" error after reverting the NQP bump | 09:30 | |
*Not | |||
nine | lizmat: but did you actually downgrade MoarVM? | 09:35 | |
lizmat | guess not :-) | ||
nine | The new collapse_strands doesn't handle strand repetitions. lizmat, could this be the issue? | 09:36 | |
lizmat | yeah looks like | 09:37 | |
samcv | lizmat: hmm weird it was passing spectest for me | 09:56 | |
yoleaux | 09:01Z <lizmat> samcv: the collapse_strands change makes a lot of test files fail, one example: gist.github.com/lizmat/c001084818a...10c13d78b3 |
samcv | i will run it again | ||
09:56 statisfiable6 joined
samcv | hmm i'm getting the same error. weird. i'll figure it out then | 09:57 | |
jnthn | morning o/ | ||
yoleaux | 09:44Z <Ven``> jnthn: Is there a way to take a parameterized role as a role parameter? | ||
09:51Z <Ven``> jnthn: I'm afraid I'm unclear, as usual. I mean taking a role you can parameterize. Like role B[::T[_, _]] { dd T[Int, Int]; }; B[Rational]; | |||
samcv | lizmat: that is... very odd. | 09:58 | |
hm ok i think i see | 09:59 | ||
Geth | MoarVM/jit-expr-optimizer: 11 commits pushed by (Bart Wiegmans)++ review: github.com/MoarVM/MoarVM/compare/3...a37d05c6ba |
10:03 | |
brrt | ^ is a rebase | ||
samcv | lizmat: ok this doesn't make sense | 10:18 | |
if what my printf's are telling me is right, it's seeing a two-strand string. both strands start at index 0 and end at index 1. but the total graphemes should be 5 | 10:19 |
also the error you were getting was a conditional i added in there, and forgot to make the error more clear | |||
if i remove my checks all the tests in that file pass... | 10:24 | ||
so this seems to be a problem with the string that's being passed to the function | |||
10:32 domidumont joined
Geth | MoarVM: 97de3a2acb | (Samantha McVey)++ | src/strings/ops.c Revert "collapse_strands with memcpy if all strands are same type 4x faster" Reverting due to some possible corruption occurring inside or prior to this function being called in roast. Will revert until I fully investigate the cause. This reverts commit 8744857ed735833e1982384c2e01dea6450448ce. |
10:36 | |
samcv | lizmat: going to revert for now, as i'm going to be going to bed and i'm not totally sure what is going on until i can do some detailed digging | ||
10:46 AlexDaniel joined
11:02 zakharyas joined
samcv | releasable6: status | 11:03 | |
releasable6 | samcv, Next release in 4 days and ≈7 hours. No blockers. 0 out of 186 commits logged |
samcv, Details: gist.github.com/20826c1ae090381cf5...da065e38db | |||
samcv | oh crap. ok looks like there's an issue in MVM_string_join. if i turn on NFG_CHECK i get failures running nqp tests | 11:04 | |
will look tomorrow | |||
should be able to solve it in time :) | |||
should probably add that we should enable NFG_CHECK strict and run all tests before doing a moarvm release as well. to make sure it's remembered | 11:05 | ||
glad i spent a fair amount of time adding that debug functionality | |||
Differing grapheme at index 3 | |||
orig: 13 (\r) after re_nfg: -1 (\r\n) | |||
is what i'm getting when running nqp tests with NFG_CHECK and NFG_check strict | 11:06 | ||
NFG failure. Got different grapheme count of MVM_string_join. Got 8 but after re_nfg got 7 | |||
dogbert17 | o/ | 11:17 | |
jnthn | o/ dogbert17 | 11:20 | |
Still not having much luck with github.com/MoarVM/MoarVM/issues/749 ... been hunting it all morning :( |
nwc10 | lunch! | 11:23 | |
dogbert2 | oops, a nasty one then | 11:29 | |
perhaps it will turn out to be blog material :) | 11:30 | ||
11:31 releasable6 joined
jnthn | Oh argh...I think I figured it out :/ | 11:33 | |
nwc10 | why the ":/" ? | 11:34 | |
jnthn | 'cus it's silly | 11:36 | |
Turns out uv_close needs calling on timers | 11:37 | ||
dogbert2 | might there be more places in the code where timers aren't closed? | 11:39 | |
there goes the blog post :) | 11:40 | ||
jnthn | I think there's only one place in the code that we do timers at all | 11:43 | |
jnthn already has enough blog posts to write... :) | |||
dogbert17 | with some luck, fixing this might resolve more problems | 11:52 | |
jnthn | I've got a few fixes that I did along the way | 11:56 | |
It seems that test is reliably clean under valgrind now too | |||
Will spectest now | 11:57 | ||
12:02 dogbert11 joined
jnthn | Yay, clean | 12:02 | |
Geth | MoarVM: 3a656f9458 | (Jonathan Worthington)++ | src/io/timers.c Make sure to close timer handles Also, it's not safe to free their memory until after the close callback fires, so re-work things to do that. |
MoarVM: ca12618171 | (Jonathan Worthington)++ | src/6model/reprs/ConcBlockingQueue.c Fix occasional old reads in ConcBlockingQueue |
MoarVM: e694386643 | (Jonathan Worthington)++ | 2 files Harden event loop against fast cancellations It was possible that a cancellation might manage to run ahead of the setup work; make sure that we handle such cases correctly. |
dogbert11 | yay | 12:04 | |
jnthn | Lunch, bbl | 12:05 | |
dogbert11 | perhaps you managed to fix github.com/rakudo/rakudo/issues/1202 as well | ||
jnthn | Maybe the SEGV side of it | 12:06 | |
lizmat | that would be excellent :-) | ||
jnthn: ready for a bump ? | 12:07 | ||
jnthn | Hihgly unlikely the hang | ||
Yeah, can do | |||
really lunch :) | |||
12:08 geospeck joined
dogbert11 | have a nice lunch | 12:12 | |
lizmat | alas, this did not fix 1202 for me: MoarVM panic: Heap corruption detected: pointer 0x1045f73b8 to past fromspace | 13:02 | |
after about 1.5 minute | 13:03 | ||
jnthn | That woulda been lucky | ||
dogbert11 | it always hangs for me, grr | 13:07 | |
13:31 brrt joined
dogbert11 | jnthn, did you get a good lunch? | 13:58 | |
jnthn | Yeah, butternut squash soup :) | 14:00 | |
dogbert11 | Nice. Any good Indian restaurants in Prague? | 14:02 | |
lizmat | jnthn: RT #132225 still reliably crashes for me | 14:04 |
synopsebot | RT#132225 [open]: rt.perl.org/Ticket/Display.html?id=132225 segmentation fault while concurrently updating SetHash | ||
lizmat | jnthn: lowering the number of threads, sometimes gives me: Cannot look up attributes in a VMNull type object | 14:05 | |
jnthn: which appears to happen in a Proxy.FETCH | 14:06 | ||
jnthn: which would imply that "self" would be a VMNull | |||
jnthn | lizmat: I'm not surprised, I doubt that'll not SEGV until we re-work VMHash | 14:07 | |
lizmat | jnthn: but all access are in a protect block ?? | 14:08 | |
jnthn: another data point: if we put another key into the Set, it doesn't segfault | 14:09 | ||
so only segfaults if we add / remove the first / last from the Set | 14:10 | ||
jnthn | Ah, then I've no idea what's happening, except maybe what was speculated about sink | 14:13 | |
lizmat | it also seems to leak like hell | 14:15 | |
in 4 seconds from 69MB to 278MB | 14:16 | ||
timotimo | mhh, that feeling when changes you made to strings cause some regex match to fail | |
and you get "sorry, invalid precompilation id" when starting rakudo | 14:18 | ||
Geth | MoarVM: ae37b6b679 | (Jonathan Worthington)++ | 3 files Include reason when we cannot inline Just in the debug output now, however some profiling tool may find it useful to gather up these reasons in the future also. |
14:23 | |
timotimo | for some reason with my change a . will now match against <-[A..Za..z0..9._-]> | ||
dogbert11 | jnthn: any immediate theories wrt gist.github.com/dogbert17/d43421ca...4d54c1c468 | 14:28 | |
14:28 brrt joined
jnthn | Hm, looks like a bad assumption about a thread living on | 14:29 | |
dogbert11 | it's my latest retest of github.com/MoarVM/MoarVM/issues/728 | ||
it does not cause asan to complain every run, more like one in two | 14:31 | ||
jnthn tries to remember why lastexpayload is marked :noinline | 14:33 | ||
timotimo | yeah, i removed that recently and spec tested, there was some failure but i didn't have time to investigate at all | 14:34 | |
jnthn | It's what blocks inlining of things like postcircumfix:<[ ]> | ||
timotimo | anything with some kind of return handler, right? | 14:36 | |
jnthn | Yeah | ||
jnthn spesh stresses with it on | 14:39 | ||
timotimo: Can you remember how recently? :) | |||
timotimo | this or last month | 14:40 | |
jnthn | Hm, OK | ||
That is probably since I did all the exception handler cleaning up in spesh... | |||
timotimo | irclog.perlgeek.de/moarvm/2017-11-02#i_15389838 | ||
jnthn | It makes it through NQP build/test | ||
m: say 5.688 / 6.374 | 14:41 | ||
camelia | 0.892375 | ||
jnthn | m: say 8.801 / 9.768 | ||
camelia | 0.901003 | ||
jnthn | Worth 10% on some array access benchmarks | 14:42 | |
timotimo | awesome | ||
dogbert11 | jnthn++ | ||
jnthn | Only if it won't break things | ||
ah, a failure in t/02-rakudo/repl.t of all places... | |||
t/05-messages/01-errors.t also | 14:43 | ||
timotimo | ah | ||
because of stack traces becoming shorter, no doubt | |||
jnthn | Hm, the errors make no sense | ||
"Cannot receive a message on a closed channel" | 14:44 | ||
o.O | |||
timotimo | :o | ||
jnthn | Hopefully the stresstest gives a simpler case | 14:45 | |
lizmat | jnthn: fwiw, I had t/02-rakudo/repl.t flap on me today | 14:46 | |
so that may be another issue | |||
jnthn | So far it looks like the spectest is only going to tell me "yeah, something about proc bust" | 14:48 | |
(With said closed channel error) | |||
Of course, it's possible that the problem is that it permits inlining something that *otherwise* breaks | 14:49 | ||
timotimo | spesh bisect time? | ||
jnthn | Well, I know it's an inline causing it | ||
ah, I think I've got a simpler case | 14:52 | ||
15:02 zakharyas joined
jnthn | m: multi foo(Int) { callsame }; multi foo(Any) { 1 }; for ^1000000 { foo(1) } | 15:37 | |
camelia | ( no output ) | ||
jnthn | Golfs to that | ||
Turns out that we never eliminate the exception handler when there's a call | 15:38 | ||
And the blocking of it being inlined in turn blocks inlining of things that might do callsame | 15:39 | ||
(Also blocks a hell of a lot more, of course) | |||
Which hides the fact that the dispatcher mechanism can't cope with inlining | 15:40 | ||
timotimo | so we mark something else :noinline, or can we comfortably fix the dispatcher mechanism to "get it"? would we have to keep the dispatcher around in a variable and reset it after an inlined call or something? | 15:41 | |
well, we do already store the dispatcher in $*DISPATCHER all the time | 15:42 | ||
i don't claim to understand how the dispatcher mechanism works: ) | |||
jnthn | It's...non-trivial...to fix :( | 15:43 | |
oh, but | 15:44 | ||
Perl6::Optimizer already looks for explicit mentions of callsame and relies on that | |||
So if we could just set a "don't inline this" flag at that level that makes it down to the VM, we're probably good | 15:46 | ||
I think we'll probably have to do it that way | |||
timotimo | how about we invent a noinline op that we can codegen and all it does is prevent inlining, otherwise it's a nop? | 15:47 | |
that way we don't have to have a flag or something | |||
jnthn | I think it's cleaner on a block | 15:51 | |
There's already a set of flags on frames anyway | 15:53 | ||
That we have infrastructure for passing down | |||
So it's not even a big change to add it | |||
15:54 zakharyas joined
timotimo | ah, neat | 15:56 | |
16:05 AlexDaniel joined
16:06 brrt joined
jnthn | Darnit. That flag does fix a bunch of the tests...but the thing busting proc turns out to be a different issue | 16:25 | |
Geth | MoarVM: b9a01f750e | (Jonathan Worthington)++ | 5 files Add support for a block noinline flag Used to indicate that a block may never be inlined; will be set by Rakudo upon spotting uses of things that need the dispatcher, which it already relies on spotting by name in the static optimizer anyway. |
16:27 | |
[Coke] finds it very slightly weird that MoarVM has nqp code in it. | 16:29 | ||
timotimo | [Coke]: somehow nqp has to know about all the consts and such | 16:30 | |
and how to interop with the mast compiler | 16:31 | ||
though that might change in the future to use bufs and such instead of objects | |||
jnthn | [Coke]: It has perl 6 scripts in tools/ too ;) | 16:32 | |
"huh, why is vim taking so long to load this spesh log"..."oh, it's 265 MB o.O" | 16:41 | ||
16:43 domidumont joined
16:51 TimToady joined
16:55 geospeck left
jnthn | So the problem is an exception handler "goes missing" | 16:56 | |
And an exception that should have been caught leaks out | |||
The handler annotations look fine though | |||
Turns out they are | |||
And with MVM_JIT_EXPR_DISABLE=1 set then things arefine | |||
*are fine | |||
timotimo | oh, interesting! | 16:59 | |
lizmat | m: END die # unrelated, I assume | ||
camelia | Unhandled exception: getexpayload needs a VMException, got P6opaque (X::AdHoc) at <unknown>:1 (/home/camelia/rakudo-m-inst-1/share/perl6/runtime/CORE.setting.moarvm:) from <unknown>:1 (/home/camelia/rakudo-m-inst-1/share/perl6/runtime/CORE.s… |
jnthn | Very | ||
lizmat | ok | ||
just checking :-) | |||
jnthn | That's not hot enough to JIT :) | ||
lizmat | yeah, and probably fixable by me :-) | 17:00 | |
jnthn | Interestingly, you can get this to appear in t/02-rakudo/repl.t without even turning on nodelay/blocking | ||
timotimo | this is with a rakudo patch for the optimizer and such, yes? | 17:02 | |
jnthn | Yes | ||
Geth | MoarVM/inline-lastexpayload: b4021cea86 | (Jonathan Worthington)++ | 2 files Remove :noinline on lastexpayload |
17:06 | |
jnthn | .tell brrt the branch inline-lastexpayload fails t/02-rakudo/repl.t but works without the expr JIT. The problem is in the code at Proc.pm:73 in Rakudo; the try thunk is inlined into the enclosing block (not new, I think), but now the .receive method was inlined into the thunk with my branch. So it seems to have needed the nested inline to trigger it. | 17:15 | |
yoleaux | jnthn: I'll pass your message to brrt. | ||
17:33 zakharyas joined
jnthn | There's one further issue besides that one, which I've not got to the bottom of yet, which results in Failure not being marked as handled even though .so is called | 17:55 | |
Which is very odd | |||
Wow, it actually stops calling .Bool, somehow | 17:59 | ||
Will have to continue some other time...will be a nice improvement to inline postcircumfix:<[ ]> and probably many other things, once we get the kinks worked out | 18:02 |
jnthn bbl | 18:05 | ||
Zoffix | This reminds me of rt.perl.org/Public/Bug/Display.htm...et-history | 18:10 | |
Where Method.gist gets lost and Mu::gist is used instead, for some reason | 18:11 | ||
18:13 AlexDaniel joined
18:27 zakharyas joined
18:39 zakharyas joined
samcv | good * MoarVM :-) | 19:41 | |
samcv cracks knuckles and starts trying to find the NFG_CHECK failure | |||
dogbert11 | samcv, have you had your morning tea yet? | 19:44 | |
samcv | drinking it this second :) | ||
dogbert11 | :) | ||
19:53 TimToady joined
19:54 robertle joined
samcv | looks like the problem was in 2017.10 also | 19:56 | |
dogbert11 | still one the hunt then | 20:11 | |
*on | |||
samcv: are you looking for why this happens: | 20:35 | ||
NFG failure. Got different grapheme count of MVM_string_join. Got 5 but after re_nfg got 3 | |||
Differing grapheme at index 0 | |||
orig: 68 (D) after re_nfg: 7690 (Ḋ) |
samcv | yeah i have a fix now | ||
dogbert11 | coool | ||
that tea sure helped :) | 20:36 | ||
Geth | MoarVM: e0ca68c9d7 | (Samantha McVey)++ | src/strings/ops.c Only run NFG_CHECK if concat is stable If concat is not stable, we already *know* concat isn't stable, so don't run it and cause misleading results. |
samcv | dogbert11: fixed | ||
the fix was much easier than i thought at first :P i thought everything was running fine, and bisecting got me nowhere, but it turns out the answer was just that we shouldn't run NFG_CHECK if concats_stable = 0, only if we are not going to run re_nfg on it | 20:37 |
dogbert11 | nice fix | 20:40 | |
t/spec/S32-str/utf8-c8.t still causes trouble though | 20:41 | ||
samcv | will check after i rebuild nqp and rakudo :) | 20:42 | |
nwc10 | fresh head for the win | ||
or maybe it was the tea | |||
dogbert11 only rebuilt MoarVM ... | |||
timotimo | samcv: i started work on a new storage format for MVMString | 20:53 | |
but it breaks regex matching | |||
wanna have a look at the code so far? | |||
Geth | MoarVM/in-situ-strings: 3ab9834d50 | (Timo Paulssen)++ | 6 files store short strings inside string's body uses the 8 bytes that the pointer to a buffer usually takes and puts up to 8 8-bit characters into it. Currently causes trouble with regex matches. |
21:22 | |
timotimo | that's the one | 21:23 | |
the last time i tried to implement this was before grapheme iterators, i believe. so i would have had to change basically every piece of code that handles strings | |||
if you take out the code that does it in latin1.c it'll get through a rakudo compilation | 21:30 | ||
but a whole lot of strings get to be latin1 if they come out of the string heap | 21:31 | ||
oh, well wasn't that easy | 21:32 | ||
bleh. i don't like when it crashes in nfa's unmanaged_size function | 21:35 | ||
dinner time i guess | 21:36 | ||
21:36 zakharyas joined, releasable6 joined
22:02 unicodable6 joined
22:03 committable6 joined, bisectable6 joined, bloatable6 joined, statisfiable6 joined
samcv | dogbert11: i'm checking some other NFG_CHECK failures | 23:28 | |
i'm getting one for t/spec/integration/advent2010-day03.t it seems to be failing with a utf8-c8 string | 23:33 | ||
ah it's a path | 23:34 | ||
i've gotten it down to: `dir.map({ .relative });` this in the rakudo git folder will trigger it | 23:46 |