notviki | m: dd 13.0R | 00:33 | |
camelia | rakudo-moar 87fefa: OUTPUT«===SORRY!=== Error while compiling <tmp> Confused at <tmp>:1 ------> dd 13.0⏏R»
notviki | really... | ||
m: use Test; is 13.0R % 4.0, 1, "infix:<%> with FatRat and Rat"; | 00:35 | ||
camelia | rakudo-moar 87fefa: OUTPUT«===SORRY!=== Error while compiling <tmp> Confused at <tmp>:1 ------> use Test; is 13.0⏏R % 4.0, 1, "infix:<%> with FatRat and »
notviki | Ah, that roast file has "sub postfix:<R>($x) { $x.FatRat }"... | ||
I thought it was some syntax I wasn't aware of:P | |||
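(Editor's sketch: the roast helper quoted above is all it takes to make the R suffix parse; this demo is an editorial illustration of what that file does, not a quote from the channel.)
    sub postfix:<R>($x) { $x.FatRat }   # as in the roast file being discussed
    say (13.0R).WHAT;                   # (FatRat)
    say 13.0R % 4.0;                    # 1, matching the test quoted above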
u: 🐀 | 00:39 | ||
unicodable6 | notviki, U+1F400 RAT [So] (🐀) | ||
notviki | m: sub postfix:<F> (Rat $r --> FatRat) is tighter(&infix:<==>) { FatRat.new: .numerator, .denominator }; say <0/0>F == <1/0>F | 00:48 | |
camelia | rakudo-moar 87fefa: OUTPUT«===SORRY!=== Unknown QAST node type NQPMu»
dalek | rakudo/nom: 73182d4 | (Zoffix Znet)++ | src/core/Rat.pm: | 01:09 | |
rakudo/nom: Fix &infix:<==> on Rationals with 0-denominator | |||
rakudo/nom: | |||
rakudo/nom: The current logic naively cross-multiplies denominator and numerator, | |||
rakudo/nom: and gives nonsense results such as <42/1> == <0/0>. By extension, | |||
rakudo/nom: smartmatch against Rationals has the same issue, as does their use as
rakudo/nom: literals in signatures. | |||
rakudo/nom: | |||
rakudo/nom: Fix by checking if at least one parameter has a zero denominator and | |||
rakudo/nom: using .Num comparison in that case, since at least one param will be | |||
rakudo/nom: an Inf, -Inf, or NaN | |||
rakudo/nom: | |||
ast: d182fdc | (Zoffix Znet)++ | S32-num/fatrat.t: Remove trailing whitespace |
01:10 | ||
notviki | gd | ||
github.com/rakudo/rakudo/commit/73...011169a355 | |||
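(Editor's sketch: a minimal illustration of the comparison rule the commit above describes. The sub name rat-equal and its body are made up, not the actual Rakudo source.)
    sub rat-equal(Rational $a, Rational $b) {
        # fall back to Num comparison when either denominator is zero,
        # since such a value numifies to Inf, -Inf, or NaN
        return $a.Num == $b.Num if $a.denominator == 0 || $b.denominator == 0;
        $a.numerator * $b.denominator == $b.numerator * $a.denominator;
    }
    say rat-equal(<42/1>, <0/0>);   # False (naive cross-multiplication said True)
    say rat-equal(<1/2>, <2/4>);    # True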
dalek | ast: eea0bb0 | (Zoffix Znet)++ | S32-num/ (2 files): Test &infix:<==> with 0-denominator Rationals Rakudo fix: github.com/rakudo/rakudo/commit/73...943ae50d41 |
01:11 | |
notviki | *such as <42/1> == <0/0> is True | ||
uhhh... why are there so many fun bugs to fix today! | 02:29
I need to finish my HNY bot for the #freenode-newyears party... | |||
[Coke] | samcv: can you forward me the email that said they got the CLA? It's still not on the online list. | 03:15 | |
samcv | ok. can you give me your email | ||
[Coke] wonders what this new Inf behavior he saw in scrollback is. | 03:21 | ||
notviki doesn't recall any new Inf behaviour in the scrollback :/ | 03:32 | ||
[Coke] | irclog.perlgeek.de/perl6-dev/2016-...i_13826253 | 03:34 | |
that's jnthn's last comment about it, with some things leading up to it. | |||
m: say Inf.Rat; | |||
camelia | rakudo-moar 73182d: OUTPUT«Inf» | ||
notviki | Ah. It's not new. I think it's from July or something | ||
And it's roundtripping Inf.Rat or something like that, with the Rational[Numeric,Int] object | 03:35 | ||
[Coke] | b: say Inf.Rat; | ||
bisectable6 | [Coke], On both starting points (old=2015.12 new=73182d4) the exit code is 0 and the output is identical as well | ||
[Coke], Output on both points: Inf | |||
[Coke] | b: say Inf.Rat.WHAT; | ||
bisectable6 | [Coke], Bisecting by output (old=2015.12 new=73182d4) because on both starting points the exit code is 0 | ||
[Coke], bisect log: gist.github.com/0e385a1f9be3aa8c79...c940a1bb7e | |||
[Coke], (2016-05-02) github.com/rakudo/rakudo/commit/e2...2a17ea815f | |||
dalek | kudo/nom: 7434a8f | (Zoffix Znet)++ | src/core/Rational.pm: Implement Rational.isNaN Currently we use Real.isNaN, which always gives False; however, a Rat can be a NaN when both its numerator and denominator are zeroes.
03:37 | |
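(Editor's sketch: the predicate the commit above describes, written as a standalone sub rather than the actual Rational.isNaN method; the name is illustrative.)
    sub is-rat-nan(Rational $r) { $r.numerator == 0 && $r.denominator == 0 }
    say is-rat-nan(<0/0>);    # True
    say is-rat-nan(<42/0>);   # False: this one numifies to Inf, not NaN
    say is-rat-nan(<1/2>);    # False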
[Coke] | I thought that was a Failure, not a NaN. (I thought NaN was only a Num) | 03:38 | |
dalek | ast: fea29a3 | (Zoffix Znet)++ | S32-num/ (2 files): Test Rational.isNaN Implemented in Rakudo in github.com/rakudo/rakudo/commit/7434a8f73e |
03:39 | |
[Coke] | Was there a ticket for that? | 03:40 | |
notviki | [Coke]: nope, I always include tickets in commit messages. | 03:43 | |
[Coke]: it's still a Failure. | |||
m: say 0/0 | |||
camelia | rakudo-moar 73182d: OUTPUT«Attempt to divide by zero using div in block <unit> at <tmp> line 1 Actually thrown at: in block <unit> at <tmp> line 1»
notviki | [Coke]: but in some contexts we use the Num value which is a NaN, per IEEE 754-2008 rules | 03:44 | |
[Coke]: I guess it makes sense to think of not what it "is", but what it becomes when we want a particular representation of a result. | 03:46 | ||
*more sense | |||
dalek | rakudo/nom: cb2476f | (Zoffix Znet)++ | src/core/Rat.pm: | 04:11 | |
rakudo/nom: Fix infix:<===> for 0-denominator Rationals | |||
rakudo/nom: | |||
rakudo/nom: Before the fix for infix:<==>[^1], the infix:<===> gave True for any | |||
rakudo/nom: pair of 0-denominator Rationals. Now it no longer does, but there | |||
rakudo/nom: are still 2 edge cases in infix:<===> that we need to take care of:
rakudo/nom: | |||
rakudo/nom: 1) since <0/0> uses NaN semantics in infix:<==>, we need an extra check | |||
rakudo/nom: for whether both params are <0/0>. We use newly-added[^2]
rakudo/nom: Rational.isNaN for that. | |||
rakudo/nom: | |||
rakudo/nom: 2) Since <42/0> == <99/0> is True, we need an extra check to ensure the | |||
rakudo/nom: numerators match as well. We need that only for 0-denominator Rationals, | |||
rakudo/nom: so we test whether just one parameter's denominator is a zero, since | |||
notviki | github.com/rakudo/rakudo/commit/cb...c6185802bc | ||
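(Editor's sketch of the two extra checks the commit message above describes, as a made-up standalone sub; the real patch lives inside &infix:<===> in src/core/Rat.pm.)
    sub rat-same(Rat \a, Rat \b) {
        my \a-nan = a.numerator == 0 && a.denominator == 0;
        my \b-nan = b.numerator == 0 && b.denominator == 0;
        return True if a-nan && b-nan;                 # edge case 1: <0/0> === <0/0>
        if a.denominator == 0 || b.denominator == 0 {  # edge case 2: <42/0> vs <99/0>
            return a.denominator == b.denominator && a.numerator == b.numerator;
        }
        a == b;                                        # ordinary normalized Rats: value equality suffices here
    }
    say rat-same(<0/0>, <0/0>);     # True
    say rat-same(<42/0>, <99/0>);   # False, even though <42/0> == <99/0> is True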
dalek | ast: fd7c11b | (Zoffix Znet)++ | S32-num/ (2 files): Test &infix:<===> with 0-denominator Rationals Rakudo fix: github.com/rakudo/rakudo/commit/cb...ae338e5735 |
04:12 | |
rakudo/nom: b3ab375 | (Zoffix Znet)++ | src/core/Rat (2 files): | 05:13 | ||
rakudo/nom: Fix Rational.Range | |||
rakudo/nom: | |||
rakudo/nom: 1) When RT#130427[^1] was fixed, we thought there were no Infs in Rationals,
rakudo/nom: so we excluded the endpoints in Rat.Range. However, Infs *are* present: | |||
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130427 | ||
dalek | rakudo/nom: when the denominator is zero, the value is +Inf with positive numerators | ||
rakudo/nom: and -Inf with negative numerators. So I'm undoing the original fix for | |||
rakudo/nom: that ticket and including Inf endpoints in the returned range. | |||
rakudo/nom: | |||
rakudo/nom: 2) the .Range is present only on a Rat and is missing from the FatRat. Fix | |||
rakudo/nom: this by moving Rat.Range into Rational.Range, so it provides .Range for | |||
rakudo/nom: both types. | |||
rakudo/nom: | |||
rakudo/nom: [1] rt.perl.org/Ticket/Display.html?id=130427 | |||
notviki | github.com/rakudo/rakudo/commit/b3...4bce96f455 | 05:14 | |
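(Editor's illustration of the behaviour the commit above relies on: zero-denominator Rationals numify to signed infinities. The Rat.Range line is an assumption based on the commit text, not output copied from the channel.)
    say <1/0>.Num;    # Inf
    say <-1/0>.Num;   # -Inf
    say Rat.Range;    # expected to be -Inf..Inf once the endpoints are included again (assumption)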
dalek | ast: 96cca2d | (Zoffix Znet)++ | S32-num/ (2 files): Redo test for Rational.Range The test was added[^1] a day ago as part of a fix for a ticket[^2] and is not part of 6.c-errata. It's testing incorrect behaviour. See Rakudo commit[^3] for rationale. [1] github.com/perl6/roast/commit/99f7d695a6 [2] rt.perl.org/Ticket/Display.html?id=130427 [3] github.com/rakudo/rakudo/commit/b3...f0cdca4a5c
05:16 | |
notviki | time for last sleep of the year \o/ | 05:18 | |
samcv | emoji sequences in graphemes coming soon :) | 05:46 | |
running spectest now | |||
m: say "🤷🏻".chars | 05:48 | ||
camelia | rakudo-moar b3ab37: OUTPUT«2» | ||
samcv | m: '('.uniprop('Bidi_Matching_Brackets').say | 06:48 | |
camelia | rakudo-moar b3ab37: OUTPUT«0» | ||
samcv | m: '('.uniprop('Bidi_Mirroring_Glyph').say | ||
oh ok. cool | |||
dalek | ar/zef: 46ef87b | (Steve Mynott)++ | patches/panda.patch: more obvious warning |
08:43 | |
samcv | buggable, help | 09:04 | |
buggable | samcv, tags | tag SOMETAG | eco | eco Some search term | speed | ||
samcv | buggable, prepend | 09:05 | |
hmm don't see an RT for lack of supporting prepend unicode glyphs | 09:06 | ||
should be added | |||
that's the only NYI thing from unicode 9.0 in my pull, which should be landing in MoarVM within the next day or so if jnthn pulls it.
dalek | kudo/nom: 7bba13a | usev6++ | / (2 files): [JVM] Make sure $J_LIBPATH is actually set For the JVM backend %nqp_config is not passed to fill_template_file, so we cannot use nqp::libdir directly. |
09:07 | |
kudo/nom: adcfb8b | usev6++ | tools/build/Makefile-JVM.in: [JVM] Fix usage of prefix Rakudo was unable to find BOOTSTRAP.jar because nqp::getcomp('perl6').config<prefix> was empty |
|||
samcv | there's some issue there, not sure why it doesn't break for that; it seems to ignore it. so we pass all unicode grapheme break tests except for ones with Prepend characters :) even the emoji
oh. tho i forgot the three character code country symbols, only two country codes work now because it parses two codes at a time... so that will require rework of things | 09:08 | ||
[Tux] | This is Rakudo version 2016.12-150-g2c2934784 built on MoarVM version 2016.12-35-g293bda71 | 11:20 | |
csv-ip5xs 3.162 | |||
test 13.481 | |||
test-t 5.234 | |||
csv-parser 13.866 | |||
samcv | hi [Tux] | ||
[Tux] | o/ | 11:21 | |
samcv | \o | ||
moritz | \o | 11:26 | |
I'm kinda unhappy with our Uni type | 11:27 | ||
I'd love to be able to do much more with it | |||
like match regexes etc. | |||
and I wonder if it would be more sensible to give it a different REPR | |||
ideally a non-NFG string | |||
so that things like substr could work | 11:28 | ||
and regex matches | |||
does that sound sensible and feasible without too much effort? | |||
samcv | hmm | ||
non-NFG? | |||
you want non normalized? | |||
moritz | I don't want normalized on the grapheme level | 11:29 | |
(see the JSON issue discussed yesterday if you wonder why) | |||
samcv | i maybe remember seeing that a week ago | 11:30
moritz | I don't care much if it's NF(K)C-composed or not | ||
samcv | so the problem is, it changes the codepoints? | ||
moritz | right | ||
samcv | ah | ||
hold on let me look at the bug report | |||
moritz | github.com/moritz/json/issues/25 | 11:31 | |
samcv | yeah looking at it now | ||
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / '"' .+? '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«Nil» | ||
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / '"' <-["]> '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«Nil» | ||
moritz | that's an illustration of the core of the problem | 11:32 | |
m: say qq["\c[SPACE]"] ~~ / '"' <-["]> '"' / | |||
camelia | rakudo-moar 2c2934: OUTPUT«「" "」» | ||
samcv | ah | ||
ok i understand the problem | |||
i should test this on my changes to moarvm but i think it probably will be the same result | |||
due to how ZWJ works. | 11:33 | ||
moritz | TimToady, jnthn: input on the discussion above would be appreciated | ||
samcv | moritz, is it only an issue near the delimiters? | ||
moritz | it's not just with zero-width joiner; any combining character right after the first " makes it not match | ||
samcv | like quotation marks | ||
moritz | samcv: yes | ||
samcv | ok | ||
that is good | |||
that means we can maybe work around it. i was thinking of implementing at least an awesome error for rakudo | 11:34
moritz | m: say qq["\c[COMBINING GRAVE ACCENT]"] ~~ / '"' <-["]> '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«Nil» | ||
samcv | implemented unicode properties searching for matching brakcets and have a warning if it's the incorrect character | ||
moritz | m: say qq["\c[COMBINING GRAVE ACCENT]"] ~~ / :ignoremark '"' <-["]> '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«Nil» | ||
moritz | IMHO that is a bug | ||
samcv | and then, we don't support this yet, but unicode 9 has prepend property of graphpme cluster break | ||
moritz | m: say qq["\c[COMBINING GRAVE ACCENT]"] ~~ / :ignoremark '"' <-["]>* '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«「"̀"」» | ||
samcv | wait no that shouldn't be an issue. just the ones that combine with the right | 11:35 | |
moritz | ah no | ||
samcv | trying to think of the easiest way to do this | ||
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / :ignoremark '"' <-["]>* '"' / | ||
camelia | rakudo-moar 2c2934: OUTPUT«「""」» | ||
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / :ignoremark '"' (<-["]>*) '"' /; say $1.chars | ||
camelia | rakudo-moar 2c2934: OUTPUT«「""」 0 => 「」Use of Nil in string context in block <unit> at <tmp> line 10» | ||
samcv | moritz, do you know which part of the code it fails on? | ||
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / :ignoremark '"' (<-["]>*) '"' /; say $0.chars | 11:36 | |
camelia | rakudo-moar 2c2934: OUTPUT«「""」 0 => 「」0» | ||
samcv | also moritz | ||
json is supposed to be valid utf-8 right | |||
moritz | yes | ||
samcv | ok then you could argue it's not valid json | ||
moritz | but UTF-8 as an encoding makes no assumptions about well-formedness of the codepoint sequence it encodes, no? | ||
samcv | uhm | ||
ah. so i guess utf-8 not unicode or something hmm | 11:37 | ||
will have to see the RFC | |||
moritz | but anyway, arguing won't bring us anywhere; loads of existing libraries produce output like that | ||
samcv | if it's literally UTF-8 encoding and nothing else | ||
yeah exactly | |||
even if that is the case we still need to do something | |||
moritz | and that it couldn't be parsed as found during a real-world use case | ||
samcv | who has the invalid json needing to be parsed? curious | 11:38 | |
moritz | m: say qq["\c[ZERO WIDTH JOINER]"] ~~ / :ignoremark ('"') (<-["]>*) '"' /; say $0.Str | ||
camelia | rakudo-moar 2c2934: OUTPUT«「""」 0 => 「"」 1 => 「」"» | ||
moritz | samcv: the JSON export of the IRC logger at irclog.perlgeek.de/ produced the json under discussion (not sure it's actually invalid), and AlexDaniel tried to use it in Perl 6 | ||
samcv | ah ok | ||
hmm | 11:39 | ||
well. i can fix this in moarvm | |||
if you wish | |||
unicode doesn't specify how to deal with 'degenerates' | |||
which is what this problem is caused by | |||
moritz | m: say so qq["\c[ZERO WIDTH JOINER]"] ~~ / :ignoremark ('"') (<-["]>*) '"' /; say ' '.samemark($0.Str).ord | ||
camelia | rakudo-moar 2c2934: OUTPUT«True 32»
moritz | samcv: I don't know if there's actually a problem in MoarVM | 11:40 | |
samcv | there isn't | ||
moritz | then we shouldn't fix it there | ||
samcv | uhm | ||
not accurate really | |||
unicode doesn't specify handling of degenerates | |||
and that's what this is | |||
so we are 100% free to basically do anything we want within reason | |||
since the two characters don't actually make up a sequence | 11:41 | ||
moritz | m: say so qq["\c[COMBINING TILDE]"] ~~ / :ignoremark ('"') (<-["]>*) '"' /; say ' '.samemark($0.Str).ord | ||
camelia | rakudo-moar 2c2934: OUTPUT«True 32»
moritz | m: say so qq["\c[COMBINING TILDE]"] ~~ / :ignoremark ('"') (<-["]>*) '"' /; say $0.Str | ||
camelia | rakudo-moar 2c2934: OUTPUT«True "̃»
samcv | err wait on second thought. let me check some more specs | 11:42 | |
but. i think we should be fine doing this | |||
moritz | say ' '.samemark(qq["\c[COMBINING TILDE]) | ||
m: say ' '.samemark(qq["\c[COMBINING TILDE]) | |||
camelia | rakudo-moar 2c2934: OUTPUT«===SORRY!=== Error while compiling <tmp> Couldn't find terminator ] (corresponding [ was at line 1) at <tmp>:1 ------> ay ' '.samemark(qq["\c[COMBINING TILDE])⏏<EOL> expecting any of: ]»
samcv | as in can't ZWJ quotation marks | ||
moritz | m: say ' '.samemark(qq["\c[COMBINING TILDE]]) | ||
camelia | rakudo-moar 2c2934: OUTPUT« ̃» | ||
moritz | m: say ' '.samemark(qq["\c[COMBINING TILDE]]).ord | ||
camelia | rakudo-moar 2c2934: OUTPUT«32» | ||
moritz | I wonder if that's a possible workaround for JSON::Tiny | 11:43 | |
match with :ignoremark | |||
samcv | what does that ignore | ||
just things with the 'Mark' property? | 11:44
moritz | then the joiner or combining mark that's next to the " is lost | ||
samcv | that won't solve the problem | ||
well maybe just for that one | |||
moritz | I don't know exactly | ||
samcv | hmm will have to know what code triggers :ignoremark | ||
moritz | and then I can capture the leading ", and try to extract any combiners from it | ||
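(Editor's sketch of the recovery step moritz describes, not JSON::Tiny code: after an :ignoremark match, peel any extra codepoints off the captured delimiter at the Uni level and put them back in front of the value. Variable names are made up.)
    my $text = qq["\c[COMBINING GRAVE ACCENT]abc"];
    if $text ~~ / :ignoremark ('"') (<-["]>*) '"' / {
        my ($quote, @marks) = $0.Str.ords;              # e.g. 34, then 768
        my $value = Uni.new(|@marks, |$1.Str.ords).Str;
        say $value.ords;                                # (768 97 98 99): combiner restored
    }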
samcv | maybe i can add something to be able to do it for other types of things | 11:45 | |
moritz | git grep ignoremark in the nqp sources gives a good overview of the code paths involved | ||
samcv | i'm assuming :ignoremark only ignores 'Mark' property though. i could be incorrect | ||
moritz | (maybe with -C5 or so) | ||
samcv | i will take a look moritz | 11:46 | |
moritz | thanks samcv | ||
samcv | hmm so it looks like ignoremark may do what we want | 11:47 | |
m: say '\x[200D]t' ~~ m:ignoremark/t/ | 11:49 | ||
camelia | rakudo-moar 2c2934: OUTPUT«「t」» | ||
samcv | m: say '\x[200D]t' ~~ m/t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«「t」» | ||
samcv | m: say 't\x[200D]' ~~ m/t/ | 11:50 | |
camelia | rakudo-moar 2c2934: OUTPUT«「t」» | ||
moritz | m: say "t\x[200D]" ~~ m/t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«False» | ||
samcv | m: say '"' ~~ m/'"'/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«「"」» | ||
moritz | m: say "t\x[200D]" ~~ /t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«Nil» | ||
moritz | m: say "t\x[200D]" ~~ rx:ignoremark/t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«「t」» | ||
samcv | m: say "t\x[200D]" ~~ m/t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«False» | ||
samcv | m: say "t\x[200D]" ~~ m:ignoremark/t/ | 11:51 | |
camelia | rakudo-moar 2c2934: OUTPUT«「t」» | ||
samcv | yeah ignoremark is what you want | ||
kind of badly named ignore mark | |||
moritz | it's still rather ugly | ||
samcv | but yeah | ||
which part? | |||
of a fix? | |||
moritz | the part where I have to look at the capture left delimiter to extract the combining characters that have been matched along with the delimiter | 11:52 | |
samcv | also there could be another problem | ||
moritz | if I had a non-NFG string, I could ignore that whole topic, and the grammar would work as intended | ||
samcv | m: say "\c[DEAD]" | ||
camelia | rakudo-moar 2c2934: OUTPUT«===SORRY!=== Error while compiling <tmp> Unrecognized character name DEAD at <tmp>:1 ------> say "\c[DEAD⏏]"»
samcv | m: say "\x[0xDBFF]" | 11:53 | |
camelia | rakudo-moar 2c2934: OUTPUT«===SORRY!=== Error while compiling <tmp> Unrecognized backslash sequence: '\x' at <tmp>:1 ------> say "\x[0⏏xDBFF]" expecting any of: argument list double quotes hex character term»
samcv | m: say "\x[DBFF]" | ||
camelia | rakudo-moar 2c2934: OUTPUT«Error encoding UTF-8 string: could not encode codepoint 56319 in block <unit> at <tmp> line 1» | ||
samcv | yep | ||
that is valid json but it won't parse | |||
because moarvm rejects it | |||
just based on its range and not being used in utf-8 | 11:54 | ||
though since we store them not in utf-8 but as graphemes i don't see why we couldn't have that work. but | |||
m: Uni.new(0x20, 0x55) ~~ /' '/ | 11:56 | ||
camelia | ( no output ) | ||
samcv | m: say Uni.new(0x20, 0x55) ~~ /' '/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«「 」» | ||
samcv | that works fine | ||
but it prolly stringifies it | |||
say Uni.new('a'.chr, 0x200D) ~~ m/t/ | 11:57 | ||
m: say Uni.new('a'.chr, 0x200D) ~~ m/t/ | |||
camelia | rakudo-moar 2c2934: OUTPUT«Cannot convert string to number: base-10 number must begin with valid digits or '.' in '⏏a' (indicated by ⏏) in block <unit> at <tmp> line 1 Actually thrown at: in block <unit> at <tmp> line 1»
samcv | huh | ||
what did i do wrong | |||
m: say 0x200D | |||
camelia | rakudo-moar 2c2934: OUTPUT«8205» | ||
samcv | m: say Uni.new('a'.chr, 0x200D) ~~ m/t/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«Cannot convert string to number: base-10 number must begin with valid digits or '.' in '⏏a' (indicated by ⏏) in block <unit> at <tmp> line 1 Actually thrown at: in block <unit> at <tmp> line 1»
samcv | wtf | ||
samcv cries | 11:58 | ||
bisectable6, say Uni.new('a'.chr, 0x200D) ~~ m/t/ | 12:01 | ||
bisectable6 | samcv, On both starting points (old=2015.12 new=2c29347) the exit code is 1 and the output is identical as well | ||
samcv, gist.github.com/16050c448f034f3334...53f11f12ea | |||
samcv | :\ | ||
bisectable6, say Uni.new(0x200D) ~~ m/t/ | |||
bisectable6 | samcv, Bisecting by output (old=2015.12 new=2c29347) because on both starting points the exit code is 1 | ||
samcv, bisect log: gist.github.com/78ad36dd206848c03a...decce7cdc4 | |||
samcv, (2016-09-27) github.com/rakudo/rakudo/commit/22...0f14b9c05c | |||
samcv | m: print "👨🏿⚕️" ~~ /"\c[BOY]"/ | 12:04 | |
camelia | rakudo-moar 2c2934: OUTPUT«Use of Nil in string context in block <unit> at <tmp> line 1» | ||
samcv | m: my $var= "\c[BOY]"; say $var | 12:05 | |
camelia | rakudo-moar 2c2934: OUTPUT«👦» | ||
samcv | m: my $var= "\c[BOY]"; print "👨🏿⚕️" ~~ /$var/ | ||
camelia | rakudo-moar 2c2934: OUTPUT«Use of Nil in string context in block <unit> at <tmp> line 1» | ||
samcv | m: 👨🏿⚕️".uninames.say | ||
camelia | rakudo-moar 2c2934: OUTPUT«===SORRY!=== Error while compiling <tmp> Bogus statement at <tmp>:1 ------> <BOL>⏏👨🏿⚕️".uninames.say expecting any of: prefix term»
samcv | m: "👨🏿⚕️".uninames.say | ||
camelia | rakudo-moar 2c2934: OUTPUT«(MAN EMOJI MODIFIER FITZPATRICK TYPE-6 ZERO WIDTH JOINER STAFF OF AESCULAPIUS VARIATION SELECTOR-16)» | ||
samcv | oh it's not boy emoji my bad | ||
m: print "👨🏿⚕️" ~~ /"\c[MAN]"/ | 12:06 | ||
camelia | rakudo-moar 2c2934: OUTPUT«👨» | ||
samcv | curious how that performs with my unicode 9 update to moar | ||
doesn't match | |||
moritz, ignoremark works fine \o/ | |||
lizmat | . | 12:36 | |
dalek | kudo/nom: 8fa6d97 | usev6++ | src/core/Regex.pm: [JVM] Type Uni is not usable on rakudo-j |
12:43 | |
kudo/nom: f0c0b07 | lizmat++ | src/core/Regex.pm: Merge pull request #980 from usev6/jvm_regex_uni [JVM] Type Uni is not usable on rakudo-j |
|||
moritz | the good news is that the :ignoremark approach seems to work in my examples | 12:54 | |
the bad one is that it doesn't seem to work in JSON::Tiny | |||
and I don't know yet why | |||
samcv | moritz, can you gist it using ignoremark so i can try and investigate? | 13:01 | |
moritz | samcv: I think I found the culprit | 13:02 | |
m: say so qq["\c[COMBINING TILDE]"] ~~ / ^ :ignoremark '"'/ | 13:03 | ||
camelia | rakudo-moar f0c0b0: OUTPUT«True» | ||
moritz | m: say so qq["\c[COMBINING TILDE]"] ~~ / ^ :ignoremark \"/ | ||
camelia | rakudo-moar f0c0b0: OUTPUT«False» | ||
moritz submits bug report | |||
dalek | kudo/nom: 1dc0c01 | lizmat++ | src/core/Proc/Async.pm: Simplify Proc::Async.new Since we're using all .bless, we might as well use all of its features. |
13:22 | |
ast: 4cb7c23 | ronaldxs++ | S12-methods/accessors.t: fix test description typos in S12-methods/accessors.t |
13:42 | ||
ast: 23de427 | ronaldxs++ | S12-methods/accessors.t: Merge pull request #210 from ronaldxs/fix-S12-methods-accessors-typo fix test description typos in S12-methods/accessors.t |
|||
nine | I wonder if trying to prevent precomp files from depending on repository implementations loaded during precompilation is really worth it. There are probably only one or two of them in use anyway and loading them is quite fast. | 14:08 | |
Zoffix | \o | 14:16 | |
dalek | ast: 7df6625 | ronaldxs++ | S03-operators/precedence.t: Superscript exponent precedence tests (#205) * add precedence and associativity tests for exponentiation by unicode superscript * add RT# for superscript exponent associativity * localize some variables and make one test todo instead of skip - implementing requested changes |
14:23 | |
bartolin runs a spectest for rakudo-j (it builds on HEAD now) | 14:28 | ||
\o/ | |||
Zoffix | \o/ | 14:29 | |
MasterDukeLaptop | lizmat: have you tried github.com/rakudo/rakudo/pull/977 again? i just re-built NQP and rakudo and re-ran the spectest. aside from new tests (i didn't rebase the PRs), everything installed cleanly and passed the tests | 14:33 | |
bartolin++ | |||
bartolin | nine: I observed that I can use rakudo-j only after 'make install'. otherwise rakudo dies because it's unable to find BOOTSTRAP.jar. (you made commit b96bf5bd to avoid that error during 'make'). | 14:35 | |
nine: on rakudo-m it seems to work because we have a '--nqp-lib=blib' in the shell script 'perl6' | |||
nine: do you have an idea how to get the old behaviour for rakudo-j back? | 14:36 | ||
dalek | p: 799d160 | (Pawel Murias)++ | src/vm/ (2 files): [js] Fix bug in serialization of NFAs. |
14:38 | |
nqp: aa7308c | (Pawel Murias)++ | src/vm/js/nqp-runtime/reprs.js: | |||
nqp: [js] Set the name on REPRS. | |||
pmurias | how was rakudo-j fixed? | 14:39 | |
bartolin | pmurias: the primary fix was this: github.com/perl6/nqp/commit/e73c94f69c (though there were two or three other things broken in rakudo land) | 14:42 | |
nine | bartolin: I'd rather we do the same trick in the perl6-j shell script | ||
bartolin | nine: ok, thanks. I'll have a look at it -- unless someone beats me at that | 14:43 | |
notviki | .ask geekosaur what were the problems with just using 1/0, -1/0, 0/0 for roundtripping? irclog.perlgeek.de/perl6-dev/2016-...i_13826225 | 14:47 | |
yoleaux2 | notviki: I'll pass your message to geekosaur. | ||
notviki | I don't see how not producing a Rat from .Rat is better than that
m: my Rat $r = Inf.Rat | 14:48 | ||
camelia | rakudo-moar 1dc0c0: OUTPUT«Type check failed in assignment to $r; expected Rat but got Rational[Num,Int] (?) in block <unit> at <tmp> line 1» | ||
notviki | m: use MONKEY; augment class Num { method Rat2 { self == Inf ?? <1/0> !! self == -Inf ?? <-1/0> !! self.isNaN ?? <0/0> !! self.Rat } }; say Inf.Rat2.Num | 14:51 | |
camelia | rakudo-moar 1dc0c0: OUTPUT«Inf» | ||
notviki | m: use MONKEY; augment class Num { method Rat2 { self == Inf ?? <1/0> !! self == -Inf ?? <-1/0> !! self.isNaN ?? <0/0> !! self.Rat } }; say Inf.Rat2 | ||
camelia | rakudo-moar 1dc0c0: OUTPUT«Attempt to divide 1 by zero using div in block <unit> at <tmp> line 1 Actually thrown at: in block <unit> at <tmp> line 1»
notviki | And you still have the failures on divide by zero, which this commit message says we wanted to keep? github.com/rakudo/rakudo/commit/49...2017d779f3 | 14:52 | |
pmurias | bartolin: good job on fixing up rakudo-j | ||
notviki | m: my $v = <42/0>; say $v === $v.Num.Rat | ||
camelia | rakudo-moar 1dc0c0: OUTPUT«False» | ||
notviki | And we don't preserve the original form anyway right now, so roundtripping a 1/0 for 42/0 is similar | 14:53 | |
notviki shrugs | |||
nine | bartolin: this (untested) patch might help getting you started: gist.github.com/niner/782c5c908315...fe6bfb496f | 14:54 | |
bartolin | great, nine++ | 15:02 | |
dalek | ast: b8bae29 | moritz++ | S05-capture/match-object.t: RT #130458: regex-matching against an NFD Uni |
15:15 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130458 | ||
dalek | ast: 1823c69 | moritz++ | S32-str/numeric.t: RT #130450: "a".Int returns a Failure |
15:20 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130450 | ||
dalek | ast: 3025653 | moritz++ | S03-metaops/reduce.t: RT #128758: numeric reduce operators numify the single argument |
15:25 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=128758 | ||
dalek | ast: 79aed59 | moritz++ | S17-supply/head.t: RT #126824: Supply.head |
15:32 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=126824 | ||
dalek | p: 1807377 | MasterDuke17++ | src/QRegex/Cursor.nqp: Make Cursor's $!name a str Hopefully turning into a native will be more efficient. |
16:31 | |
p: 0ccfb63 | lizmat++ | src/QRegex/Cursor.nqp: Merge pull request #334 from MasterDuke17/make_Cursor_name_a_str Make Cursor's $!name a str |
|||
kudo/nom: c0d8a45 | MasterDuke17++ | src/core/Cursor.pm: Make Cursor's $!name a str Hopefully turning into a native will be more efficient. |
|||
kudo/nom: 9eef565 | MasterDuke17++ | src/core/Cursor.pm: Make Cursor.MATCH about 10-15% faster By converting more of it to NQP ops and pulling some initial variable declarations out of loops (reducing the number of call frames and garbage collections). |
|||
kudo/nom: 8ec578e | lizmat++ | src/core/Cursor.pm: Merge pull request #977 from MasterDuke17/make_Cursor_name_a_str Bump NQP to get the Cursor fixes |
|||
ast: e6aa85b | usev6++ | S (7 files): Unfudge some passing tests on JVM |
16:56 | ||
travis-ci | Rakudo build failed. lizmat 'Merge pull request #977 from MasterDuke17/make_Cursor_name_a_str | 17:10 | |
travis-ci.org/rakudo/rakudo/builds/187918366 github.com/rakudo/rakudo/compare/1...c578e4dbbb | |||
buggable | [travis build above] ☠ Did not recognize some failures. Check results manually. | 17:11 | |
lizmat assumes because travis didn't see the nqp bump yet | |||
notviki | "P6opaque: invalid native access attribute '$!name' in type Cursor for kind str" | ||
Was there an nqp bump? | 17:12 | ||
lizmat | yup that's the one | ||
yes, about 30 mins ago | |||
notviki | ah, now I see it | ||
dalek | ast: e89914d | usev6++ | S (15 files): Fudge for X::Assignment::RO on JVM, RT #130470 |
19:27 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130470 | ||
samcv | hmm. RE the unicmp_s function. it returns an int. i'm thinking of making it return different things | 19:40 | |
it returns -1, 0 or 1. i'm thinking if it falls back to comparing by codepoint instead of any collation value it should return +2 or -2 | |||
thoughts? | |||
or maybe should be the other way around, -2 and +2 for when the collation is used and -1 and +1 when we fallback to by codepoint | 19:41 | ||
or maybe have it return based on which collation level it used in the end. so -1 and +1 for primary, if those are the same then collation level 2 is used for diacritics and such, return -2 +2, for tertiary etc etc | 19:48 | ||
and -4 and +4 if it falls back to codepoint | |||
for the quaternary level | |||
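(Editor's sketch of the return-value convention samcv is proposing, not an implemented API: the sign gives the ordering and the magnitude names the collation level that decided it, with 4 standing for the codepoint fallback. The sub name is made up.)
    sub describe-cmp(Int $r) {
        return 'equal' if $r == 0;
        my $level = <primary secondary tertiary codepoint>[$r.abs - 1];
        "{ $r < 0 ?? 'before' !! 'after' } (decided at the $level level)";
    }
    say describe-cmp(-3);   # before (decided at the tertiary level)
    say describe-cmp(4);    # after (decided at the codepoint level)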
dalek | ast: 6dad257 | usev6++ | S0 (2 files): Fudge some tests for JVM |
19:58 | |
samcv | and need to see how to handle collation values of zero. because atm characters with _no_ collation value show up as zero. but i need a way to distinguish between 0 and no value | 19:59 | |
use nqp; nqp::unicmp_s('a', 'A',7, 0, 0 ) #> -3 | 20:06 | ||
i like that. can easily tell what decided the collation | |||
though I also need to think about how to denote that one of the characters has no collation value, as opposed to having checked all three properties, found them all equal, and gone past tertiary to the codepoint numerical comparison | 20:07
japhb | samcv: It's an interesting idea, but I have to wonder if it falls under the "you think it's cute now ..." umbrella, because if we canonicalize it as part of the API, it then becomes something we have to support in the future ... even if we find an implementation that e.g. manages to handle all levels of collation at once and thus doesn't *know* which one was the tie-breaker. | 20:16 | |
samcv | hm | 20:17 | |
true | |||
japhb | (This isn't just an idle thought -- if indeed all collation is in terms of numbers, with fixed ranges per Unicode release, we could use big numbers composed of multiplying the most important one by the size of the next most, and adding them, that sort of thing.) | ||
Just as an example. | |||
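(Editor's sketch of japhb's "one big number" idea; the level ranges below are invented placeholders, not real Unicode collation table sizes.)
    constant SECONDARY-RANGE = 0x100;   # hypothetical sizes
    constant TERTIARY-RANGE  = 0x40;
    sub collation-key(UInt $primary, UInt $secondary, UInt $tertiary --> UInt) {
        ($primary * SECONDARY-RANGE + $secondary) * TERTIARY-RANGE + $tertiary;
    }
    # two strings sharing primary and secondary weights still order by tertiary,
    # without the caller ever knowing which level broke the tie:
    say collation-key(5, 1, 2) <=> collation-key(5, 1, 9);   # Less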
samcv | hm | ||
that is not very helpful though | 20:18 | ||
to know what level it went to | |||
could just have -5 +5 mean it doesn't know what it did | |||
japhb | Anyway, the point is mostly ... decide if this is just a useful side effect of *the current implementation* of collation, or if it is something we want to make *required* of all implementations. | ||
samcv | since the only defined levels are primary secondary and tertiary | ||
or maybe something bigger just in case | |||
japhb | All implementations ever. | ||
samcv | idk some number | 20:19 | |
japhb | For example, how are you going to do this on jvm and js? Will these be able to easily produce collation information like that. | ||
? | |||
japhb goes afk for a bit | |||
samcv | so some implementations might only implement the 4th level by codepoint, so could return some value saying they only checked codepoint | 20:20 | |
but if they are doing collation they could set it to 7 which means 3 levels, and if it gets back 'equal' to do it by codepoint and return said value | 20:21 | ||
japhb, in java: | 20:23 | ||
You can set a Collator's strength property to determine the level of difference considered significant in comparisons. Four strengths are provided: PRIMARY, SECONDARY, TERTIARY, and IDENTICAL
though getting back the info on which it used idk if you can | 20:24 | ||
well you could do primary then secondary then tertiary but that's more work | 20:25
there _has_ to be some way to tell if it's by codepoint or based on the collation property though | |||
-10/10 could just be by codepoint, and then return another value. on backends not supporting it just return the value of the collation level requested | 20:26 | ||
i use a bitmask to choose which levels of collation to do atm. so on moarvm at least you can pick whichever combination you want, even though other backends probably don't support that, like doing levels 1 and 3 if you don't care about accent marks or something | 20:27
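(Editor's sketch of the level bitmask samcv mentions; the constant names and values are made up, though 0b111 == 7 matches the third argument in the nqp::unicmp_s call shown earlier.)
    constant PRIMARY   = 0b001;
    constant SECONDARY = 0b010;
    constant TERTIARY  = 0b100;
    my $levels = PRIMARY +| TERTIARY;       # compare base letters and case, skip accents
    say so $levels +& SECONDARY;            # False: the accent (secondary) level was not requested
    say PRIMARY +| SECONDARY +| TERTIARY;   # 7, i.e. all three levels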
lizmat | Files=1159, Tests=54478, 188 wallclock secs (11.29 usr 4.72 sys + 1225.64 cusr 127.61 csys = 1369.26 CPU) | 21:16 | |
lizmat goes back to drinking beer and waiting for the end (or the beginning, depending on perspective) | 21:21 | ||
japhb | lizmat: The beginning of the end? | 23:20 |