samcv \x[298F]\x[2990]\x[298D]\x[298E] ok 00:08
these two are switched
i just checked all of our delimiters we currently have in nqp's bracket file. and these two sets of brackets are the only ones that have the wrong matching delimiter 00:09
dalek p: cd8973d | samcv++ | src/HLL/Grammar.nqp:
Fix two bracket pairs which had each others closing delimiters

Per BidiBrackets.txt Unicode 9.0:
298D; 2990; o # LEFT SQUARE BRACKET WITH TICK IN TOP CORNER
298E; 298F; c # RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
Previously the Left Top Corner was paired with right bottom corner and the Left Bottom Corner was paired with Right Top Corner which was incorrect.
IRC log: irclog.perlgeek.de/perl6/2017-01-03#i_13839721
03:03
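
As an aside, a minimal Perl 6 sketch of what the corrected pairing means in practice on a Rakudo that includes this fix (using the literal characters: U+298D/U+2990 share the top-corner tick, U+298F/U+298E the bottom-corner one):

    say q⦍hello⦐;   # U+298D opens, U+2990 closes
    say q⦏world⦎;   # U+298F opens, U+298E closes
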
ast: 3044240 | samcv++ | S02-literals/quoting-unicode.t:
Fix two bracket pairs which had each others closing delimiters

Per BidiBrackets.txt Unicode 9.0:
298D; 2990; o # LEFT SQUARE BRACKET WITH TICK IN TOP CORNER
298E; 298F; c # RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
Previously the Left Top Corner was paired with right bottom corner and the Left Bottom Corner was paired with Right Top Corner which was incorrect.
IRC log: irclog.perlgeek.de/perl6/2017-01-03#i_13839721
03:04
kudo/nom: 76283f6 | samcv++ | tools/build/NQP_REVISION:
Bump NQP: Fix two bracket pairs which had each others closing delimiters

Per BidiBrackets.txt Unicode 9.0:
298D; 2990; o # LEFT SQUARE BRACKET WITH TICK IN TOP CORNER
298E; 298F; c # RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
Previously the Left Top Corner was paired with right bottom corner and the Left Bottom Corner was paired with Right Top Corner which was incorrect.
IRC log: irclog.perlgeek.de/perl6/2017-01-03#i_13839721
03:11
ast/6.c-errata: e82c278 | samcv++ | S02-literals/quoting-unicode.t:
Fix two bracket pairs which had each others closing delimiters

Per BidiBrackets.txt Unicode 9.0:
298D; 2990; o # LEFT SQUARE BRACKET WITH TICK IN TOP CORNER
298E; 298F; c # RIGHT SQUARE BRACKET WITH TICK IN BOTTOM CORNER
Previously the Left Top Corner was paired with right bottom corner and the Left Bottom Corner was paired with Right Top Corner which was incorrect.
IRC log: irclog.perlgeek.de/perl6/2017-01-03#i_13839721
03:16
ast: 5b8fb62 | samcv++ | S02-literals/quoting-unicode.t:
Fix a typo and make it fit 80 column width
03:25
ast/6.c-errata: ae13900 | samcv++ | S02-literals/quoting-unicode.t:
Fix a typo and make it fit 80 column width
03:26
ast: 5a6fe25 | samcv++ | S02-lexical-conventions/unicode.t:
Correct unicode.t as well so for the ticked brackets
03:35
ast/6.c-errata: 785cc71 | samcv++ | S02-lexical-conventions/unicode.t:
Correct unicode.t as well so for the ticked brackets
samcv ok i think that's it 03:37
[Tux] last night: 07:18
This is Rakudo version 2016.12-187-g61887712e built on MoarVM version 2016.12-50-g9a9f2d46
csv-ip5xs 3.054
test 12.655
test-t 5.213
csv-parser 13.303
this morning:
This is Rakudo version 2016.12-190-g76283f6e1 built on MoarVM version 2016.12-50-g9a9f2d46
csv-ip5xs 3.096
test 13.906
test-t 5.675
csv-parser 14.553
samcv yeye speedz 07:22
timotimo it noised upwards quite a bit 07:24
buggable: speed
buggable timotimo, ▅▅▅▅▅█▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▃▃▃▂▂▂▂▂▂▂▁▁▁▁▂▁▂▃▁▁▂▁▂▁▂▁▃▃ data for 2016-12-12–2017-01-03; variance: 5.137s–7.592s
[Tux] timotimo: both pastes were the fastest of two consecutive runs. the alternative for -190 was 5.9 :( 07:26
timotimo oof
masak wait, the variance is given as an interval? 07:34
isn't that a confidence interval or something? 07:35
timotimo it's not like a standard deviation, it's just the minmax, i believe
masak variance is usually taken to mean the square of the standard deviation
timotimo right
nine I'm quite sure it's just minmax 07:48
arnsholt In that case, I suggest s/variance/range/ 07:51
[Tux] jnthn, got a new one :( 08:02
# expected: Buf.new(61,125,17,108,54,202,12,120,39,225,91,9,125,124,163,24,100,110,156,192,137) 08:03
# got: Buf.new(61,125,17,108,54,202,12,120,39,225,91,9,125,124,163,24,100,110,156,9)
# expected: Buf.new(61,57,204,118,97,221,164,168,63,30,168,197,108,198,67,28,111,192,161,122,96)
# got: Buf.new(61,57,204,118,97,221,164,168,63,30,168,197,108,198,67,28,111,33,122,96)
# expected: Buf.new(61,180,192,142,191,171,181,101,4,238,122,232,11,194,77,144,221,109,108,228,192)
# got: Buf.new(61,180,14,191,171,181,101,4,238,122,232,11,194,77,144,221,109,108,228,192)
looks like they all have 192 in them
arnsholt This Unicode shenanigans? 08:08
If those are supposed to be UTF-8 byte sequences, 192 followed by 137, 161 and 142 are all invalid 08:14
Those can all be replaced by shorter sequences (in this case, 9, 33 and 14)
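
The arithmetic behind those replacements, sketched in Perl 6 (just the standard UTF-8 bit layout, not MoarVM's code):

    # 0xC0 0x89 is an over-long two-byte sequence: its payload bits spell out the
    # same codepoint as the single byte 0x09 (tab), which is why 192,137 collapses to 9.
    my $codepoint = ((0xC0 +& 0x1F) +< 6) +| (0x89 +& 0x3F);
    say $codepoint;   # 9
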
[Tux] arnsholt, utf8-c8 08:21
the underlying idea is that one can store real binary data (JPEG images) as utf8-c8 08:22
that way string functions can be used instead of Buf
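
A small Perl 6 sketch of that round-trip idea, with bytes chosen purely for illustration:

    my $blob = Buf.new(255, 254, 0, 65);    # not valid UTF-8 (255 and 254 can never appear)
    my $str  = $blob.decode('utf8-c8');     # the invalid bytes become synthetic codepoints
    say $str.encode('utf8-c8').perl;        # the original four bytes come back unchanged
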
samcv 8 bit clean is just like utf-8 but like with 7 bytes? 08:23
or some bs? like that
arnsholt [Tux]: Right. So what are the underlying bytes in those cases? 08:41
'Cause those expected bytes are very iffy as UTF-8, but I don't know if utf8-c8 relaxes the UTF-8 constraints in some way to let you store arbitrary bytes 08:44
timotimo the whole point of utf8-c8 is to let you store arbitrary bytes as utf8 and have the non-valid parts at least round-trip through encoding as utf8-c8 again 08:51
arnsholt Right. So you read bytes in, and the bits that happen to be valid UTF-8 codepoints get decoded as such, and the ones that aren't just get read in as bytes? 08:56
timotimo they get synthetics created for them 08:57
m: say Buf.new(^255 .roll(32)).decode('utf8-c8')
camelia rakudo-moar 76283f: OUTPUT«􏿽xE0􏿽xEFP#/􏿽xA8􏿽xE6􏿽x98Ԩ#k-􏿽x8E􏿽xF3􏿽x9Bb􏿽xB9􏿽xBCek2􏿽xBB􏿽xB3􏿽xC4␤»
arnsholt Right 08:58
In that case, it's probably the decoder not recognizing that 192 is an invalid UTF-8 leader byte 08:59
timotimo m: say 0x7f 09:03
camelia rakudo-moar 76283f: OUTPUT«127␤»
timotimo it compares > 0x7f
arnsholt In utf8_c8.c:classify right? 09:06
timotimo oh, i totally glanced past that
arnsholt Heh! 09:07
First hit for 7f in the file =D
Anyways, I think classify needs to consider that 1100 0000, 1110 0000, etc. are invalid 09:08
timotimo oooh 09:09
well
there's EXPECT_CONTINUATION 09:10
if the continuation byte doesn't have its first two bits 10, it'll "process_bad_bytes"
arnsholt Yeah, that's the code I think 09:13
timotimo note also that what came after the 192 was not a null byte
arnsholt If the byte is exactly 0b11000000 it's invalid
Similarly for 0b11100000 and friends 09:14
Or wait
This is more complicated, I think
0b11100000 is actually valid I *think*
But only if the next two bytes are sufficiently full of bits 09:15
timotimo oh you're so full of bits :P 09:16
arnsholt I guess process_ok_codepoint needs to account for this, or something
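
For reference, here is the over-long condition in plain Perl 6 (the helper name is made up; this is the UTF-8 validity rule itself, not MoarVM's actual classify/EXPECT_CONTINUATION code):

    # A sequence is over-long when the same codepoint would fit in fewer bytes.
    sub is-overlong-start(@bytes) {
        my $lead = @bytes[0];
        return True             if $lead == 0xC0 || $lead == 0xC1;   # 2-byte forms of U+0000..U+007F
        return @bytes[1] < 0xA0 if $lead == 0xE0;                    # 3-byte forms below U+0800
        return @bytes[1] < 0x90 if $lead == 0xF0;                    # 4-byte forms below U+10000
        False;
    }
    say is-overlong-start([0xC0, 0x89]);   # True  -- encodes the same codepoint as plain 0x09
    say is-overlong-start([0xC3, 0xA9]);   # False -- a legitimate two-byte sequence (é)
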
nwc10 good UGT, #perl6-dev
arnsholt o/
timotimo m: say Buf.new(110,156,192,137).decode('utf8-c8').encode('utf8-c8)
camelia rakudo-moar 76283f: OUTPUT«===SORRY!=== Error while compiling <tmp>␤Unable to parse expression in single quotes; couldn't find final "'" ␤at <tmp>:1␤------> 7).decode('utf8-c8').encode('utf8-c8)⏏<EOL>␤ expecting any of:␤ argument list␤ …»
timotimo m: say Buf.new(110,156,192,137).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(110,156,9)␤»
timotimo m: say Buf.new(156,192,137).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(156,9)␤»
timotimo that's enough to trigger the problem 09:17
m: say Buf.new(156,192).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(156,192)␤»
timotimo that isn't
m: say Buf.new(156,192, 0).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(156,192,0)␤»
timotimo m: say Buf.new(156,192, 1).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(156,192,1)␤»
timotimo m: say Buf.new(156,192, 129).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(156,1)␤»
arnsholt m: say Buf.new(192, 129).decode('utf8-c8').encode('utf8-c8').perl # And this, I guess? 09:19
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(1)␤»
timotimo seems like
nine j: say $*PERL.compiler
camelia rakudo-jvm 8ca367: OUTPUT«rakudo (2016.11.71.g.8.ca.367.d)␤»
arnsholt And probably...
*scribblescribble*
m: say Buf.new(192, 128).decode('utf8-c8').encode('utf8-c8').perl 09:20
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(0)␤»
timotimo that's also wrong, yeah
arnsholt m: say Buf.new(0xe0, 128, 128).decode('utf8-c8').encode('utf8-c8').perl
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(0)␤»
arnsholt m: say Buf.new(0xf0, 128, 128, 128).decode('utf8-c8').encode('utf8-c8').perl 09:21
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(0)␤»
arnsholt makes an issue 09:23
timotimo at least the thing doesn't asplode any more due to bad memory reads :) 09:24
nwc10 yep. ASAN does not consider your bugs worthy of comment. 09:25
jnthn++
|Tux| arnsholt, there already is/was an issue 09:44
nwc10 am I right in thinking that m-spectest5 is only able to run in parallel on *nix, whereas m-spectest6 can also run in parallel on Win32?
timotimo i think that's what it is, yeah 09:46
though i think it isn't 100% reliable yet
arnsholt [Tux]: For MoarVM? 09:47
Too late anyways. I've written it up as MoarVM#481
|Tux| I couldn't find it anyway 09:48
nwc10 yes, not 100% reliable. ASAN occasionally triggers. jnthn knows why (but I forget why)
arnsholt Me neither, when I searched just now
m: say Buf.new(0b1100_0001, 0b0000_0000).decode('utf8-c8').encode('utf8-c8').perl # Maybe this one too? 09:50
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(193,0)␤»
arnsholt Apparently not 09:51
Oh, derp
m: say Buf.new(0b1100_0001, 0b1000_0000).decode('utf8-c8').encode('utf8-c8').perl # Maybe *this* one too?
camelia rakudo-moar 76283f: OUTPUT«Blob[uint8].new(64)␤»
arnsholt There we go =D
bartolin yesterday we had a discussion about division by zero on nqp-j vs. nqp-m: irclog.perlgeek.de/perl6-dev/2017-...i_13833581 10:24
use nqp; say nqp::div_In($_,0) for -1,0,2
r: use nqp; say nqp::div_In($_,0) for -1,0,2
camelia ( no output )
..rakudo-moar 76283f: OUTPUT«-Inf␤NaN␤Inf␤»
bartolin this is a request for comments on the following change to nqp::div_In for nqp-j: github.com/usev6/nqp/commit/be4b51425a 10:25
I *think* that patch makes sense, but I'm not really sure -- so I'd like to get some feedback 10:26
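
For anyone skimming, the zero-divisor semantics being matched are easy to sketch in plain Perl 6 (the sub name is made up; this is just the rule from the MoarVM output above, not bartolin's JVM patch):

    sub div-by-zero(Int $n) {
        $n == 0 ?? NaN !! $n < 0 ?? -Inf !! Inf
    }
    say div-by-zero($_) for -1, 0, 2;   # -Inf, then NaN, then Inf
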
dalek p: 81c0e83 | jnthn++ | tools/build/MOAR_REVISION:
Bump MOAR_REVISION for Unicode improvements.
10:51
kudo/nom: ee38721 | jnthn++ | tools/build/NQP_REVISION:
Get latest MoarVM with Unicode improvements.

  * Catch some further invalid sequences in utf8-c8 decoding (jnthn++)
  * Secondary/tertiary Unicode collation support (samcv++)
10:54
ast: 1c42eea | jnthn++ | S32-str/utf8-c8.t:
Some further utf8-c8 tests.

We didn't have any that covered invalid/over-length representations.
10:55
notviki bartolin: the results make sense to me. 11:07
No idea about the rest. I don't know java
buggable: speed 11:09
buggable notviki, ▅▅▅▅▅█▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▃▃▃▂▂▂▂▂▂▂▁▁▁▁▂▁▂▃▁▁▂▁▂▁▂▁▃▃ data for 2016-12-12–2017-01-03; range: 5.137s–7.592s
bartolin I wonder whether there is a better place to do the checks for division by zero. but it seems to be a good thing to do it in nqp::div_In, so that that operator behaves the same as on MoarVM 11:11
however, if noone suggests something better, I'll commit my patch later today. thanks for looking 11:13
psch bartolin: i'd say that's exactly where the check should go 11:20
lizmat m: class A { method sink { say "goodbye" } }; A # shouldn't this just need to say "goodbye" ??? 12:08
camelia rakudo-moar ee3872: OUTPUT«WARNINGS for <tmp>:␤Useless use of constant value A in sink context (line 1)␤»
psch we don't really seem to be using method sink for the sink warning, actually 12:13
lizmat hmmm... I seem to remember a time when it was 12:14
psch as in, the actual text appears in the Optimizer, and i don't see any spot where .sink is called explicitly
yeah, might be a regression, it certainly makes sense that it should be called
lizmat I was thinking about using this feature for the repl-here functionality 12:15
m: REPL
camelia rakudo-moar ee3872: OUTPUT«WARNINGS for <tmp>:␤Useless use of constant value REPL in sink context (line 1)␤»
lizmat you want a repl for debugging your code: just add a line with REPL :-)
or something like: 12:16
psch hide that behind a pragma maybe? 'cause the constant as such seems a bit too magical
lizmat m: REPL if %*ENV<REPL>
camelia rakudo-moar ee3872: OUTPUT«WARNINGS for <tmp>:␤Useless use of constant value REPL in sink context (line 1)␤»
psch well, assuming the .sink stuff works out in the first place :)
lizmat bisectable: help 12:17
bisectable6 lizmat, Like this: bisectable6: old=2015.12 new=HEAD exit 1 if (^∞).grep({ last })[5] // 0 == 4 # RT128181
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=128181
lizmat bisectable: old=2015.12 new=HEAD class A { method sink { say "goodbye" } }; A 12:18
bisectable6 lizmat, On both starting points (old=2015.12 new=ee38721) the exit code is 0 and the output is identical as well
lizmat, Output on both points: WARNINGS for /tmp/6YLL2XBaEU:␤Useless use of constant value A in sink context (line 1)
lizmat hmmm...
bisectable: old=2014.12 new=HEAD class A { method sink { say "goodbye" } }; A
bisectable6 lizmat, Bisecting by output (old=2014.12 new=ee38721) because on both starting points the exit code is 0
lizmat, bisect log: gist.github.com/cd8cd01db69a8b084c...7eedfcae0f
lizmat, (2015-12-17) github.com/rakudo/rakudo/commit/00...1883a6f15c
psch committable6: v6.c class A { method sink { say "sunk" } }; A 12:19
committable6: help
committable6 psch, ¦«2015.12,2016.02,2016.03,2016.04,2016.05,2016.06,2016.07.1,2016.08.1,2016.09,2016.10,2016.11,2016.12,HEAD»: WARNINGS for /tmp/3olyApeBtW:␤Useless use of constant value A in sink context (line 1)
psch, Like this: committable6: f583f22,HEAD say ‘hello’; say ‘world’
psch well, that fits with a pre 2015.12 commit at least
oh, yeah, the bisect commit definitely looks right too 12:20
lizmat yeah
I guess we would need to add checking for a .sink method if it is a class
psch that's gonna be somewhat annoying i guess, considering we're adding the worry in the optimizer 12:21
lizmat hmmm... even before that patch, .sink didn't get called 12:30
perhaps jnthn moritz TimToady can shine their light on this
m: class A {}; A.sink # this assumes there's a .sink somewhere 12:31
camelia ( no output )
psch committable6: 2015.11 class A { method sink { say "sunk" } }; A
committable6 psch, ¦«2015.11»:
psch well, Mu has .sink 12:32
m: Any.^lookup('sink').perl
camelia ( no output )
psch m: Any.^lookup('sink').perl.say
camelia rakudo-moar ee3872: OUTPUT«method sink (Mu $: *%_ --> Nil) { #`(Method|64799696) ... }␤»
dalek p: 6105446 | niner++ | src/vm/jvm/ModuleLoader.nqp:
Always add JVM class paths, even when --module-path is used

It's counter intuitive to leave out standard search paths just because a custom path is specified. Especially since --module-path takes only a single path and can only be used once.
12:33
nqp: 4b1c72c | niner++ | src/vm/jvm/ModuleLoader.nqp:
nqp: Add support for NQP_LIB environment variable in JVM ModuleLoader
nqp:
nqp: This is needed so the Perl 6 main.nqp can run with a custom "blib" search path.
nqp: Otherwise it would fail on the use statements when trying to compile the
nqp: setting because it would try to load the installed libraries instead of the
nqp: ones created during the build.
kudo/nom: directory to the front of the search path. Fixes setting compilation when
there are already Perl6:: NQP modules installed.
12:34
nine ^^^ Fixes the infamous "Missing or wrong version of dependency
'gen/jvm/stage2/QRegex.nqp'" when compiling Perl6::World
bartolin: ^^^
psch: ^^^
psch wow
nine++
that's honestly great
i guess that might've been fallout from my bootclasspath removal, in hindsight
nine I finally got fed up enough cleaning camelia's build to have a look at where it goes wrong 12:35
The fix could have been much easier but I do want to get rid of the hard coded . and blib search paths which is now possible 12:36
lizmat psch: if I add nqp::say("sunk") to Mu.sink, it *does* get called instead of A.sink
psch lizmat: with --optimize=off too? 12:38
lizmat yup
psch huh. well, we do call .sink explicitly in main.nqp... 12:39
on the final result though. not sure that explains Mu.sink instead of A.sink though
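
For contrast, a minimal sketch: explicit dispatch does reach the user-supplied method; it is only the implicit call in sink context that lands on Mu.sink (which is what gets filed as RT #130493 below):

    class A { method sink { say "goodbye" } }
    A.sink;        # prints "goodbye" -- ordinary method dispatch finds A's sink
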
arnsholt What do we think about a test file (for stresstest, for obvious reasons) with 2^32-1 tests? =) 12:48
It's the obvious way to smoke out any further bugs like the one [Tux] found earlier: make sure that all possible byte sequences up to length 4 round-trip correctly 12:49
Which would make it more than 2^32, come to think of it
lizmat do you have an estimate for its runtime ? 12:50
I mean, if it's going to take an hour, that might not sit well with the release manager
arnsholt Nope. But I suspect it's a bit overkill, even for stresstest, yeah
In my defence, I meant it mostly in jest =) 12:51
lizmat I'm only against it because of the runtime, I like the idea
arnsholt Yeah, thus the mostly in jest
I'll see how long it takes if and when I get around to hacking on that stuff
It might smoke out bugs in the test harnesses too, come to think of it =) 12:52
lizmat yeah, so like I said, the only reason in my view not to do it, would be at release time
even if it does run an hour, I wouldn't mind running it every now and then 12:53
also, you could maybe split it up and have them test in parallel ?
arnsholt Yeah, that could work
At least for the four-byte (and maybe three-byte sequences)
Mmmmmm. 4.3 billion tests 12:55
lizmat perhaps only emitting an ok for every X values checked ? 12:56
arnsholt Yeah, that's an idea too
lizmat or maybe keep the number of tests open ended, and only emit a not-ok for every failure ?
RT #130493 # A.sink not being called, but Mu.sink is 12:57
arnsholt Oh, that's a neat idea!
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130493
arnsholt Or even "plan 1", "pass 'done'" at the end and output nok for each failure 12:58
lizmat yup, also an idea :-) 12:59
afk for a bit&
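
A Perl 6 sketch of that failures-only shape, restricted to the two-byte sequences (a hypothetical test body, not an actual roast file):

    use Test;
    plan 1;
    my $failures = 0;
    for ^256 X ^256 -> ($a, $b) {
        my $buf = Buf.new($a, $b);
        unless $buf.decode('utf8-c8').encode('utf8-c8').join(',') eq $buf.join(',') {
            diag "utf8-c8 round-trip failed for bytes ($a, $b)";
            $failures++;
        }
    }
    ok $failures == 0, 'all two-byte sequences round-trip through utf8-c8';
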
|Tux| is it correct that I heard some rumours about panda being deprecated?
timotimo i don't know if you have heard some rumours 13:03
notviki |Tux|: panda is being replaced by zef in Rakudo Star. That's about it. 13:05
And they aren't rumours; there's blog post about it: blogs.perl.org/users/steve_mynott/2...uture.html 13:06
|Tux| would you advice panda users to switch to zef? 13:09
This is Rakudo version 2016.12-193-gfb4f16166 built on MoarVM version 2016.12-55-gfe110d60
csv-ip5xs 3.042
test 13.044
test-t 5.251
csv-parser 13.860
# expected: Buf.new(61,1,251,34,193,152,7,136,253,183,136,13,116,113,248,143,176,217,172,177,121)
# got: Buf.new(61,1,251,34,88,7,136,253,183,136,13,116,113,248,143,176,217,172,177,121)
193,152 => 88 13:10
notviki Yes, for the same reasons the blog post says zef's gonna be used in R*
arnsholt [Tux]: Yeah, that's the same bug as the one we discussed earlier today (a leader of 193 being a two-byte sequence but only seven bits of codepoint) 13:12
If that's with the updated MoarVM from earlier today, looks like jnthn's bugfix isn't fixy enough =)
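
The same over-long arithmetic as this morning, applied to the new pair of bytes:

    # 0xC1 0x98: two-byte leader, but the payload fits in seven bits.
    my $got = ((0xC1 +& 0x1F) +< 6) +| (0x98 +& 0x3F);
    say $got;   # 88 -- exactly what came back in place of 193,152
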
notviki m: Buf.new(0b1100_0001,0b1000_0000).decode('utf8-c8').encode('utf8-c8').perl 13:13
camelia ( no output )
|Tux| it is with a new git fetch
notviki m: Buf.new(0b1100_0001,0b1000_0000).decode('utf8-c8').encode('utf8-c8').perl.say
camelia rakudo-moar fb4f16: OUTPUT«Blob[uint8].new(193,128)␤»
notviki m: for ^1000_000 { Buf.new(0b1100_0001,0b1000_0000).decode('utf8-c8').encode('utf8-c8') === Blob[uint8].new(193,128) }; say now - INIT now
camelia rakudo-moar fb4f16: OUTPUT«(timeout)WARNINGS for <tmp>:␤Useless use of "===" in expression ".encode('utf8-c8') === Blob[uint8].new(193,128)" in sink context (line 1)␤» 13:14
notviki m: for ^500_000 { $ = Buf.new(0b1100_0001,0b1000_0000).decode('utf8-c8').encode('utf8-c8') === Blob[uint8].new(193,128) }; say now - INIT now
camelia rakudo-moar fb4f16: OUTPUT«(timeout)»
notviki m: for ^1000 { $ = Buf.new(0b1100_0001,0b1000_0000).decode('utf8-c8').encode('utf8-c8') === Blob[uint8].new(193,128) }; say now - INIT now
camelia rakudo-moar fb4f16: OUTPUT«0.1303087␤»
notviki m: say 130308000/3600
camelia rakudo-moar fb4f16: OUTPUT«36196.666667␤»
|Tux| installed new kernel. must reboot now. bbs 13:15
notviki arnsholt: so it'd be a test that'd take 36196 hours to run on a single core? :}
arnsholt Surely not a problem! =) 13:16
nine Seems like it'd be worth investing a day or 200 into optimization before starting that test... 13:18
arnsholt Yup. 13:19
bartolin nine++ 14:03
Oh, dalek is absent. I just pushed my nqp::div_In patch: github.com/perl6/nqp/commit/0b055b9266 14:13
notviki cool 14:19
cognominal hi, what are synthetics and pseudotiles in MoarVM branch even-moar-jit ? 14:33
lizmat cognominal: synthetics usually refer to artificial codepoints generated for character sequences for which there is no composed version 14:36
pseudotiles feels more like a thing that brrt might be able to answer 14:37
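
A quick Perl 6 illustration of the string-level synthetics lizmat describes (b plus a combining acute is just an arbitrary pair with no precomposed character):

    my $s = "b\c[COMBINING ACUTE ACCENT]";
    say $s.chars;   # 1 -- one grapheme, backed by a synthetic codepoint under NFG
    say $s.ords;    # (98 769) -- the two real codepoints it stands for
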
notviki cognominal: we're starting a business producing poor quality synthetic construction materials to fund Perl 6 development ;)
cognominal: here's some writing on the tiles stuff: perl6advent.wordpress.com/2016/12/...-compiler/ 14:38
jnthn lizmat: In the context of the JIT, they mean something else. (I don't know how to explain it, alas. :))
brrt cognominal: a pseudotile is a tile that doesn't have a 'proper' emit function
the terminology 'tile' comes from the procedure that selects instructions (via 'tiling' by which the original tree is 'covered') 14:39
samcv <notviki> cognominal: we're starting a business producing poor quality synthetic construction materials to fund Perl 6 development ;)
good idea
cognominal thx
brrt so the way that works is that we generate a list of objects that can write machine code to the assembler 14:40
and then we loop over that list and output machine code
and a pseudotile is the name for a 'placeholder' of some sort
jnthn brrt: What's a case where we need a pseudotile? 14:41
brrt a 'synthetic' tile is a tile (object) that is not created by the tiling process but afterwards during register allocation (or other things)
well, for one thing, for resolving multiple value paths in IF (in fact, PHI) to one
another one is for setting up the arguments to a function call (that's called an ARGLIST node) 14:42
there are a bunch of these examples
historically, 'pseudotile' included tiles that were introduced *during* tiling, e.g. jumps and labels to handle within-procedure conditional jumps
but that's not really a good way to refer to these 14:43
if i had to call them anything, i'd call them 'structural' tiles, but there is no reason to distinguish these from 'natural' tiles anymore
also, recently, tiles that resolve to a simple register declaration (i.e. the MVMThreadContext always resides in register r14) 14:49
jnthn Yes, a setdispatcherfor op seems to nicely resolve the original issue I was hunting down 14:50
brrt \o/
jnthn Unfortunately, the changes invalidate the JIT of the op 14:53
Some small changes might make it easy to update though
brrt will be happy to take a look 14:54
once i understand what's going on, anyway
jnthn oops, I mis-channeled the last bit ;) 15:00
brrt no worries :-) 15:01
lizmat m: my str @a = <a b c d e>.pick(*); @a.sort # :-( 15:32
camelia rakudo-moar fb4f16: OUTPUT«compare requires a concrete string, but got null␤ in block <unit> at <tmp> line 1␤␤»
lizmat m: my int @a = (1,2,3,4,5).pick(*); say @a.sort
camelia rakudo-moar fb4f16: OUTPUT«3 4 5 1 0␤»
lizmat hmmm 15:33
lizmat will look at this later today or tomorrow
this was not caught by spectests :-(
notviki dalek \o/ 15:34
dalek p: 41f8d8b | jnthn++ | tools/build/MOAR_REVISION:
MoarVM bump for setdispatcherfor op.
16:23
p: 5f40147 | jnthn++ | src/vm/moar/QAST/QASTOperationsMAST.nqp:
Map setdispatcherfor on MoarVM backend.
kudo/setdispatcherfor: dc34413 | jnthn++ | src/vm/moar/ops/perl6_ops.c:
Missing MVMROOT around try_get_lexical.

It may vivify it on demand.
16:25
kudo/setdispatcherfor: 1c3999a | jnthn++ | src/Perl6/Metamodel/Dispatchers.nqp:
Use the new setdispatcherfor op.

In a branch for now, needs MoarVM HEAD, still need to fix up JVM.
p: 918cdb3 | jnthn++ | src/vm/jvm/ (3 files):
Introduce setdispatcherfor op on JVM backend.

So that Rakudo can start using it. Should mean we also get the same set of bug fixes that use of the new op provides for MoarVM also.
16:55
jnthn Hmm, if I build a fresh Rakudo JVM then run it I get 17:02
java.nio.file.NoSuchFileException: /home/jnthn/dev/rakudo/nqp/jvm-install/share/nqp/lib/Perl6/BOOTSTRAP.jar
bartolin jnthn: does it work after 'make install'? 17:03
jnthn That's what I'm trying
(waiting for install-core-dist to complete) 17:04
bartolin irclog.perlgeek.de/perl6-dev/2016-...i_13828563 17:05
jnthn bartolin: Yes, it helps
Ah, you already notified nine about it
bartolin jnthn: nine suggested an untested patch here: irclog.perlgeek.de/perl6-dev/2016-...i_13828633
jnthn OK
I don't have time to look into that issue
Good news is that my patch to fix up nextsame and friends works on JVM too 17:06
bartolin yeah, I hope, I'll find some time soon-ish
\o/
jnthn So we get the fix on Moar and JVM and don't have to fudge/#?if :)
bartolin sounds very good, jnthn++ :-)
notviki just booked all Mondays until end of February off o/ 17:08
Hopefully could get some quality time hacking on Perl 6
dalek kudo/nom: 8d5fe1a | jnthn++ | src/vm/moar/ops/perl6_ops.c:
Missing MVMROOT around try_get_lexical.

It may vivify it on demand.
17:10
kudo/nom: fe34aa8 | jnthn++ | tools/build/NQP_REVISION:
Bump to get new setdispatcher op.
kudo/nom: 0c0dd82 | jnthn++ | src/Perl6/Metamodel/Dispatchers.nqp:
Use the new setdispatcherfor op.

This corrects various bugs arising from the wrong block picking up the dispatcher, by setting the target that it should be captured by. This fixes a number of issues, including use of nextsame in conjunction with multi subs with `where` clauses, and the spooky issue reported as an OO::Monitors bug (which uses callsame) where occasionally the callsame would not, in fact, call anything. That was due to the finalizers being run post-GC, and the finalizer stole the dispatcher, thus why any code that used Failure was more likely to tickle this.
17:14
ast: d667dda | jnthn++ | S12-methods/defer-next.t:
Add test to cover RT #123989.
17:23
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=123989
lizmat jnthn: HARNESS_TYPE=6 make spectest now has a lot of tests out of order and one test hanging 17:54
*and* the same kind of failure like before :-(
so in that regard, things haven't gotten better :-(
ah, another NQP bump: trying that now 17:55
jnthn lizmat: I'm not even working on it. 17:56
Or any concurrency stuff in the last days 17:57
So if it's gotten worse it's likely something else.
(Last couple of days were Unicode and a deferral bug and a regex engine bug, which feel rather distant from anything harness6 does) 17:58
It'd be interesting to work out when the new failures were introduced. 17:59
About the original failure, do have some clues what it's about.
(Still deciding how to fix that)
But if it's regressed in some other way that will only help so much :( 18:00
dalek ast: cd43168 | jnthn++ | integration/failure-and-callsame.t:
Add a test to cover failure + callsame issue.

  See comment in the test for details.
18:04
kudo/nom: 4b68d30 | jnthn++ | t/spectest.data:
Run integration/failure-and-callsame.t.
jnthn That took nearly all day to find/fix/write tests for... 18:08
At least it nailed an RT or two also
Nasty.
dogbert2: I've tagged rt.perl.org/Ticket/Display.html?id=125135 as testneeded; we can close it up once your test is in :) 18:12
m: gist.github.com/jnthn/2c8d09f38aad...7a79ca718d 18:31
camelia rakudo-moar 4b68d3: OUTPUT«not ok 1 - Can put multi-line Pod on a role␤␤# Failed test 'Can put multi-line Pod on a role'␤# at <tmp> line 3␤# Error: Method 'connect' must be implemented by Test::Antenna because it is required by roles: Test::Antenna.␤»
dalek ast: 13403ee | dogbert17++ | S12-meta/ (2 files):
Test for RT 125135
18:33
ast: 3e2015d | dogbert17++ | S12-meta/ (2 files):
Merge pull request #213 from dogbert17/test-rt-125135

Test for RT 125135
jnthn dogbert2: You got RT perms to close it, or shall i? 18:37
dogbert2 jnthn: plz close it 18:41
I don't have perms, need to get hold of [Coke] first I guess 18:42
hmm, t/spec/S02-types/mixhash failed one test 18:43
jnthn done. thanks 18:45
dalek kudo/nom: d487657 | jnthn++ | src/Perl6/Metamodel/BOOTSTRAP.nqp:
Fix putting Pod onto a role with requirements.

Previously, this led to the role being punned, which is not wanted in the case of attaching/retrieving documentation.
ast: 4031e22 | jnthn++ | S26-documentation/block-leading.t:
Test for RT #130208.
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130208
jnthn There, an easier bug. :) 18:47
dogbert2 how many of those are there I wonder 18:48
notviki heh... they're always easy when you know what to look for :P
dogbert2 :) 18:49
I've found, at $work, that seemingly easy problems can be incredibly hard to solve and vice versa 18:50
jnthn Time for some rest. :) 18:52
Might be back later, though I slept awfully last night, so mebbe not. :) 18:53
notviki cpan@perlbuild2~/CPANPRC/rakudo (nom)$ grep -R 'hack' src/ | wc -l 18:57
10
cpan@perlbuild2~/CPANPRC/rakudo (nom)$ grep -ER 'evil.*hack' src/ | wc -l
2
notviki laughs
and two matches for 'shoddy' :}
hm... t/spec/S32-io/socket-host-port-split.t ............................ seems to hang on my VM 19:08
DrForr "Did you sleep good?" "No, I made a few mistakes." 19:23
dalek ast: b3d3a73 | (Zoffix Znet)++ | S32-io/socket-host-port-split.t:
Fudge IPv6 test

The test seems to hang on boxes without ipv6 support
kudo/nom: c13e67b | (Zoffix Znet)++ | src/core/ (2 files):
Replace heuristic type detection with actual type check

Seems to also be 3x faster
19:28
lizmat notviki: good catch 19:31
timotimo neat.
dalek kudo/nom: f9ed730 | lizmat++ | src/core/ (2 files):
Remove unnecessary stubs
19:56
notviki \o/ 19:57
bartolin sent a CLA in order to get a commit bit for rakudo (cc'd lizmat and [Coke]) 20:16
lizmat bartolin==
++
bartolin *g*
lizmat grrr
:-)
notviki bartolin++ 20:42
nine bartolin: about time :) 20:49
dalek ast: 0a7b6c5 | usev6++ | S32-num/rat.t:
Unfudge passing test on JVM (RT #128264)
21:01
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=128264
dalek ast: fed7442 | usev6++ | S32-array/adverbs.t:
Unfudge passing test on JVM (RT #128123)
21:34
synopsebot6 Link: rt.perl.org/rt3//Public/Bug/Displa...?id=128123
cognominal m: grammar A { token TOP { :my $*T; <t> {say $*T}}; token t { $<t>=t { $*T=$<t> }}}; A.parse('t'); 22:27
camelia rakudo-moar f9ed73: OUTPUT«「t」␤»
cognominal m: grammar A { token TOP { :my $*T; <t> {say $*T}}; token t { $*T=t }}; A.parse('t'); 22:28
camelia ( no output )
cognominal hum probably not a good idea. 22:31
I wanted an idiom for an environment variable to double as a hypothetical. 22:32
more exactly I wanted the hypothetical implicit and the dynvar set when the whole rule matched 22:40
s/environment variable/dynvar/ btw
probably too much magic here 22:41