Geth | rakudo: MasterDuke17++ created pull request #1052: Enable `make t/spec/<file> HARNESS_TYPE=6` to work | 00:22 | |
rakudo/nom: 408b4035ea | (Daniel Green)++ | Configure.pl Enable `make t/spec/<file> HARNESS_TYPE=6` to work | 00:45 | ||
rakudo/nom: 833c29c0f5 | (Zoffix Znet)++ | Configure.pl Merge pull request #1052 from MasterDuke17/allow_harness6_for_single_files Enable `make t/spec/<file> HARNESS_TYPE=6` to work |
MasterDuke | Zoffix: fyi re windows v unix path separators, RT #130226 is about the output when using `--ll-exception` | 01:01 | |
synopsebot6 | Link: rt.perl.org/rt3/Public/Bug/Display...?id=130226 | ||
Zoffix | OK | 01:16 | |
MasterDuke | m: my @a = ^5; my @b = @a[*]; dd @b; my @c = @a[]; dd @c; my @d = @a[0..*]; dd @d | 01:22 | |
camelia | Array @b = [0, 1, 2, 3, 4] Array @c = [0, 1, 2, 3, 4] Array @d = [0, 1, 2, 3, 4] |
MasterDuke | could/would those three ways ^^^ ever give different results? | 01:23 | |
m: my @v = ^5; loop (my int $i = 0; $i < 10_000; $i = $i + 1) { @v[] }; say now - INIT now | 01:26 | ||
camelia | 0.1091065 | ||
MasterDuke | m: my @v = ^5; loop (my int $i = 0; $i < 10_000; $i = $i + 1) { @v[*] }; say now - INIT now | ||
camelia | 0.66615758 | ||
MasterDuke | m: my @v = ^5; loop (my int $i = 0; $i < 10_000; $i = $i + 1) { @v[0..*] }; say now - INIT now | ||
camelia | 1.1111908 | ||
Geth | rakudo: MasterDuke17++ created pull request #1053: Make `my @a = ^10; my @b = @a[*]` 30-40x faster | 03:10 | |
rakudo: MasterDuke17++ created pull request #1054: Make `my @a = ^10; my @b = @a[0..*]` 50-80x faster | 03:51 | ||
MasterDuke | those two PRs were inspired by RT #125344 | 03:52 | |
synopsebot6 | Link: rt.perl.org/rt3/Public/Bug/Display...?id=125344 | ||
[TuxCM] | This is Rakudo version 2017.03-71-g833c29c0f built on MoarVM version 2017.03-84-g07448141 | 06:40 | |
csv-ip5xs 3.033 | |||
test 12.305 | |||
test-t 4.840 - 4.856 | |||
csv-parser 12.563 | |||
samcv | is it faster | 06:41 | |
it better be | |||
well it was 13% faster when i tested it compared to march's release | 06:42 | ||
is that in seconds or minutes or what | 06:44 | ||
secs i guess. bbl | 06:49 | ||
timotimo | huggable: speed | 08:26 | |
huggable | timotimo, tux.nl/Talks/CSV6/speed4.html | ||
timotimo | seems like it's barely faster :\ | ||
Zoffix | buggable: speed | 10:26 | |
buggable | Zoffix, [graph] data for 2017-03-07–2017-03-31; range: 4.829s–7.664s | ||
Zoffix | Well, seems like we need a better bench :) | ||
*** Reminder: the deadline to submit your suggestions and corrections to the IO Action Plan is TOMORROW, April 1st. You can find the Plan at github.com/rakudo/rakudo/blob/nom...on-Plan.md | 12:41 | ||
timotimo | Zoffix: i haven't had the energy to give the plan the attention it deserves :< | 12:42 | |
Zoffix | There were surprisingly few comments on the Plan. I just hope it's not gonna happen that the second I start implementing it, people will start protesting :/ | ||
[Coke] | Zoffix: I plan on reviewing tomorrow, fwiw. | 13:06 | |
Zoffix | What does nqp::unbox_s() do? I see some code using it to unbox Strs into, say, nqp::iseq_s(), but is there any point to doing that? | ||
[Coke]++ thanks | |||
timotimo | yes, there is | 13:07 | |
look at the Str class, it has a str attribute with "is box_target" | |||
jnthn | Hm, but wouldn't we emit the unbox anyway? | ||
timotimo | hm, probably | ||
jnthn | I know that back in the Parrot days there was a notable performance difference | ||
Because of various code-gen differences | 13:08 | ||
But on MoarVM/JVM and probably thus JS there's likely no need to manually unbox, unless you're storing the result of unboxing in a temporary variable to re-use it | |||
Can always check the bytecode produced to be really sure | |||
(pre-compile and then moar --dump) | 13:09 | ||
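A minimal sketch of the one case jnthn carves out above, re-using an unboxed value from a native temporary; this is an illustration written for this log, not code from the discussion:

```perl6
use nqp;
my $s = "hello";
my str $tmp = nqp::unbox_s($s);   # unbox once into a native temporary
say nqp::iseq_s($tmp, "hello");   # 1 -- re-uses the already-unboxed value
say nqp::iseq_s($tmp, "world");   # 0
```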
Zoffix | "Unhandled exception: Bytecode stream corrupt (missing magic string)" | 13:11 | |
Zoffix shrugs | |||
timotimo: well, I don't get it. So what if there's that attribute? | 13:12 | |
m: use nqp; say nqp::iseq_s("x", nqp::unbox_s("x")) | 13:13 | ||
camelia | 1 | ||
Zoffix | --target=mast has the same number of MAST::Op decont for both versions; is that it? | 13:17 | |
jnthn | I guess it'd be number of unbox_s you'd want to count | ||
Though could just diff the 2 | |||
Zoffix | unbox_s is there only if used | ||
jnthn | Oh | 13:18 | |
'cus it's on a literal probably, so we just generate better code in the first place without it :) | |||
Try it with my $x = 'x'; and then an nqp::unbox($x) or so | |||
Zoffix | Nothing found with this command: perl6 --target=mast -e 'use nqp; my ($x, $y) = "x", "x"; say nqp::iseq_s($x, $y)' | grep unbox_s | 13:19 | |
But there are two unbox_s with this one: perl6 --target=mast -e 'use nqp; my ($x, $y) = "x", "x"; say nqp::iseq_s(nqp::unbox_s($x), nqp::unbox_s($y))' | grep unbox_s | 13:22 | ||
Woodi | Zoffix: so no methods in IO::Handle check for null?? but overriding the currently relevant methods is an invitation to future errors, e.g. someone adds a method but forgets to add it to the override part. maybe the throw/check could be added somewhere lower so errors can use it? | 13:41 | |
Zoffix | Woodi: sounds like the type of problem that is solvable with proper tests rather than a performance penalty, is it not? A .getc repeatedly done on a million-character file will need to do a million such checks. | 13:46 | |
.lines has to do it per line | |||
I guess it already *does* do it lower, but the error is non-descript, since it has no idea what called the lower-level routine | 13:47 | ||
m: IO::Handle.new.getc | |||
camelia | IO::Handle is disallowed in restricted setting in sub restricted at src/RESTRICTED.setting line 1 in method new at src/RESTRICTED.setting line 32 in block <unit> at <tmp> line 1 |
Zoffix | ugh | ||
I need an unrestricted bot... | |||
$ perl6 -e 'IO::Handle.new.getc' | 13:48 | ||
read string requires an object with REPR MVMOSHandle (got Scalar with REPR P6opaque) | |||
The problem with mixing in stuff is: (a) it's slowish, is it not? And (b) you can't unmix it | 13:49 | ||
Woodi | re stat: some weeks ago there was an article on HN saying that Linux does tons of stat calls per second if some shell env variable for the timezone is not set. and 4 stat syscalls for .rwx?? stat() returns a struct stat with all the info at once... | ||
Zoffix | So right now you can actually re-open the same handle, but with the mixing-in-stuff approach you won't be able to. | ||
Woodi | and the mixin into the object happens at runtime? | |||
Zoffix | Woodi: I know it does, but we don't use it. .rwx makes 4 separate nqp calls that each use 1 item from the stat call | 13:50 | |
timotimo | you can store the original object inside the object itself | ||
Zoffix | Woodi: yeah | ||
m: class Handle { method lines { say "LINES!" }; method close { self.^mixin: role Closed { method lines { say "NO LINES FOR U!" } } } }; my $h = Handle.new; $h.lines; $h.close; $h.lines | 13:52 | ||
camelia | LINES! NO LINES FOR U! |
Zoffix | This is the idea basically. I don't know the sanity of it.... | ||
s/NO LINES FOR U/fail "Cannot call .lines on a closed filehandle"/ | 13:53 | ||
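A fleshed-out version of Zoffix's sketch with that s/// applied; `Handle` and `Closed` are toy names used for illustration, not Rakudo's actual IO::Handle code:

```perl6
class Handle {
    method lines { say "LINES!" }
    method close {
        # once closed, shadow the reading methods with failing ones
        self.^mixin: role Closed {
            method lines { fail "Cannot call .lines on a closed filehandle" }
        }
    }
}

my $h = Handle.new;
$h.lines;                                        # LINES!
$h.close;
my $result = $h.lines;                           # now a Failure instead of output
say $result.exception.message unless $result;    # Cannot call .lines on a closed filehandle
```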
Woodi | Zoffix: making the .xxx-es into macros that do stuff at compile time would be nice :) | 13:54 | |
Zoffix doesn't follow | |||
Woodi: what .xxx-es? | 13:55 | ||
.rwx and stuff? That'd result in pretty broken code, considering the data would be valid only as of when, say, a module was first precompiled :o | |||
Woodi | .rwx = .stat + .r + .w + .x currently ? one way is to add all .xyz nqp calls but maybe some code generation could help :) | 13:56 | |
Zoffix | .rwx = .e + .r + .w + .x currently | 13:57 | |
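For context, what .rwx amounts to in user-space terms; illustrative only, since the real implementation goes through separate nqp-level stat calls:

```perl6
my $path = $*TMPDIR;                             # some path that exists
say $path.rwx;                                   # True if it exists and is readable, writable, executable
say $path.e && $path.r && $path.w && $path.x;    # the same checks spelled out
```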
Woodi | Zoffix: btw. writing things today because on April 1st it would be tempting to write some strange things ;) | 13:58 | |
postponing the deadline to April 2nd could help clarify any eventual strange ideas :) | 13:59 | ||
Zoffix | Or I could just ignore any "strange" ideas. | 14:01 | |
m: say [>] [2, 1]; say [>] [1, 2]; say [>] [1, 1] | 14:02 | ||
camelia | True False False |
Zoffix | m: say [>] $[2, 1]; say [>] $[1, 2]; say [>] $[1, 1] | ||
camelia | True True True |
timotimo | it's not april 1st yet where i live | 14:03 | |
Woodi | Zoffix: is .resolve's current behaviour bad? at least the description is not clear | 14:07 | |
yay ! slurp as slurp \o/ | 14:09 | ||
Zoffix doesn't understand the question | |||
Woodi wasn't questioning anything... | 14:10 | ||
Zoffix | OK. There was a question mark :/ | 14:12 | |
Woodi | ah, right. but it was *-2. IMO proposition for .resolve is not clear for me. why current behavior is bad ? | 14:15 | |
and isn't Cat huffmanized too well? :) | 14:16 | ||
Zoffix | Woodi: because $foo.resolve doesn't give you a fully-resolved path 100% of the time and you have no way to tell when it doesn't | 14:17 | |
Woodi | Zoffix: the "Make :close behaviour the default in IO::Handle" title looks scary. but the proposition is only to change the methods that slurp-all by default? | 14:29 | |
Zoffix | Woodi: all the listed methods: IO::Handle and IO::Pipe routines .slurp-rest, .lines, .words, .comb, and .split | 14:35 | |
[Coke] | unicodable6: help | 14:40 | |
unicodable6 | [Coke], Just type any unicode character or part of a character name. Alternatively, you can also provide a code snippet. # See wiki for more examples: github.com/perl6/whateverable/wiki/Unicodable | ||
Woodi | Zoffix: removing the .rwx methods from IO::Handle: check man lstat() - it works on an open filehandle. using IO::Path instead is not the same. also no idea what .modified is doing but it looks like ActiveRecord functionality :) | 14:58 | |
Zoffix | Woodi: please write all your comments into appropriate spots instead of this chat | 15:00 | |
looks like the upper range for timings in buggable was buggy... Instead of using $max I was using $min + wrong stuff | 15:09 | ||
buggable: speed | 15:18 | ||
buggable | Zoffix, [graph] data for 2017-03-07–2017-03-31; range: 4.829s–7.664s; 5% faster | ||
Zoffix | buggable: speed 2 | ||
buggable | Zoffix, [graph] data for 2017-03-31–2017-03-31; range: 4.840s–4.856s; ~0% difference | ||
Zoffix | buggable: speed 5 | ||
buggable | Zoffix, [graph] data for 2017-03-29–2017-03-31; range: 4.831s–5.099s; 5% faster | ||
Zoffix | buggable: speed 100 | ||
buggable | Zoffix, [graph] data for 2017-02-12–2017-03-31; range: 4.788s–7.664s; 5% faster | ||
Zoffix | buggable: speed 25 | 15:19 | |
buggable | Zoffix, [graph] data for 2017-03-20–2017-03-31; range: 4.831s–6.637s; 3% faster | ||
Zoffix | $number after speed => use $number last readings (default is 50) | ||
Geth | rakudo/nom: c360ac276b | (Zoffix Znet)++ | src/core/IO/Path.pm [io grant] Fix smartmatch of Cool ~~ IO::Path Currently the multi slips the Cool into IO::Path.new, which causes exceptions when smartmatching against Iterable Cools, whenever they have more than 1 item in them. Fix by using the value of .IO method as the new path to check against. Cool provides it. Also toss IO::Path:D candidate, since IO::Path is Cool and is handled by the Cool:D candidate | 15:37 | |
rakudo/nom: 0c7e4a0b60 | (Zoffix Znet)++ | 3 files [io grant] Do not capture args in .IO method We don't use them anyway, and avoiding the Capture makes the method ~7% faster. | 15:39 | ||
rakudo/nom: a117dfc15d | (Zoffix Znet)++ | docs/2017-IO-Grant--Action-Plan.md Fix typo s/IO::Path.path/IO::Handle.path/ | 15:42 | ||
Zoffix | Woodi: it was a typo. It's not IO::Path.path but IO::Handle.path and it's exactly the same. Because that's exactly what all those methods currently call under the hood. | ||
huh | 15:46 | ||
m: dd qp{foo} | |||
camelia | ===SORRY!=== Error while compiling <tmp> Undeclared routines: foo used at line 1 qp used at line 1 |
Zoffix | Oh, I guess S32-io/path.t is not run as part of spectest? | ||
`qp{/foo/bar} creates a IO::Path object` | 15:47 | ||
Somewhat indulgent, considering '/foo/bar'.IO does the same and is just 1 char longer to type | |||
timotimo | how about a qwop quoter | 15:50 | |
Zoffix | Whatsitdo? | ||
timotimo | dunno, it makes you walk funny | 15:59 | |
Geth | roast: 8d6ca7a8c9 | (Zoffix Znet)++ | S32-io/io-path.t [io grant] Cover IO::Path.ACCEPTS | 16:05 | |
Zoffix | ZOFVM: Files=1230, Tests=132985, 116 wallclock secs (21.78 usr 3.26 sys + 2436.86 cusr 130.50 csys = 2592.40 CPU) | 16:10 | |
ZOF6VM: Files=1230, Tests=132985, 240 wallclock secs | 16:15 | ||
Zoffix is happy to see same test count for both harnesses :D | 16:16 | ||
dogbert17++ | |||
m: say 240/116 | |||
camelia | 2.068966 | ||
Zoffix | And 6harness is only 2x as slow as 5harness. Pretty good! | 16:17 | |
lizmat | m: role A { method !a { } }; class B does A { method b() { self!a } }; for ^100000 { B.b }; say now - INIT now # private method in role | 18:38 | |
camelia | 0.4760571 | ||
lizmat | m: role A { method a { } }; class B does A { method b() { self.a } }; for ^100000 { B.b }; say now - INIT now # same, but not private | 18:39 | |
camelia | 0.3883969 | ||
lizmat | m: say 0.4760571 / 0.3883969 | ||
camelia | 1.22569748 | ||
lizmat | if a private method lives in a role, the dispatch does not get optimized | 18:40 | |
m: class B { method !a() {}; method b() { self!a } }; for ^100000 { B.b }; say now - INIT now # just in the class | 18:41 | ||
camelia | 0.23204060 | ||
lizmat | m: class B { method a() {}; method b() { self.a } }; for ^100000 { B.b }; say now - INIT now # non-private | ||
camelia | 0.2315761 | ||
lizmat | so, if the private method does live in the class itself, there is no noticeable difference | 18:42 | |
where should I look to get this optimized properly ? | |||
(before I start unprivatizing a lot of Baggy role private methods) | |||
Zoffix | lizmat: dunno if it's actually used but there's dispatch:<!> method in Mu: github.com/rakudo/rakudo/blob/nom/...#L711-L721 | 18:45 | |
samcv | good * | ||
Zoffix | \o | 18:46 | |
lizmat | Zoffix: but that's the problem: it shouldn't be called at all | ||
Zoffix | Ah | ||
lizmat | the fact that the dispatch is taking place at runtime over and over again, is what slows it down | ||
but *only* if the private method is obtained from a role | |||
samcv | just fed the puppy. so cute | 18:47 | |
lizmat | samcv o/ | ||
samcv | she's grown so much | ||
Zoffix | m: BEGIN Mu.^mixin: role { method dispatch:<!> (|) { say "meow" } }; role A { method !a { } }; class B does A { method b() { self!a } }; | 18:48 | |
camelia | ( no output ) | ||
Zoffix | m: BEGIN Mu.^mixin: role { method dispatch:<!> (|) { say "meow" } }; role A { method !a { } }; class B does A { method b() { self!a } }.new.b | 18:49 | |
camelia | ( no output ) | ||
Zoffix shrugs | 18:50 | ||
both private-in-role and private-in-class ASTs have - QAST::Op(callmethod dispatch:<!>) <wanted> a in them... | |||
lizmat | Zoffix: am as confused as you are :-( | 18:51 | |
Zoffix | 4 times in fact :S | ||
¯\_(ツ)_/¯ no idea :) | 18:52 | ||
Geth | rakudo/nom: ae3ff5c9e4 | (Elizabeth Mattijsen)++ | src/core/Baggy.pm Streamline Baggy.new - using nqp ops - about 25% faster - also affects bag() / mix() subs | 19:19 | |
jnthn | Is the call to the private method in the class or in the role? | 19:57 | |
If the call is from the class it should be possible to optimize the same way (though I don't know why it wouldn't be happening already, in this case) | 19:58 | ||
If the call is in the role then it's generic, so the same optimization isn't applicable | 19:59 | ||
lizmat | hmmm... I just had a flapper on t/spec/S17-promise/nonblocking-await.t : You planned 19 tests but ran 9. | 20:28 | |
jnthn: in my example earlier, the call is in the class | 20:29 | ||
and that's 20+% slower because it goes through Mu.dispatch:<!> | |||
also: why wouldn't the opt work if the call is in the role ? | 20:30 | ||
once it is in the class, that shouldn't make a difference, should it ? | |||
jnthn | Hm, if it's not optimizing the call in the class that sounds like something we should be able to do | 20:32 | |
If the call is in the role - because there's only one copy of the code, but many signatures | 20:33 | ||
That is to say, we don't actually copy the code | |||
We compile roles like C#/Java generics, not like C++ templates, if that makes any sense. | |||
And to clarify a bit more, many signatures means many code objects | 20:34 | ||
The way we compile signature binding code in methods in roles is actually different from how we compile it in classes for the same reason | |||
This is also why we emit slightly different code for type-variable lookups even if they are technically just lexical lookups: it's so spesh knows that it can turn them into constants when it specializes on invocant type | 20:35 | ||
lizmat | jnthn: I'm not seeing the signature part: we don't have MMD on private methods, right ? | 20:36 | |
jnthn | No, but the invocant is generically typed (::?CLASS) | ||
And there may be further type vars in effect | |||
lizmat | ah | ||
hmmm... | |||
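A tiny illustration of the genericity jnthn is describing: the role body is compiled once, and ::?CLASS is only bound when a class composes the role. The example is constructed for this log, not taken from the discussion:

```perl6
role Greet {
    # ::?CLASS is generic here; it resolves per consuming class
    method greet { "hello from " ~ ::?CLASS.^name }
}
class Cat does Greet { }
class Dog does Greet { }
say Cat.greet;   # hello from Cat
say Dog.greet;   # hello from Dog
```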
jnthn | The reason things are really quite pessimal, though, is that the slow path for private method dispatch is...pretty bad | 20:37 | |
And possibly not something spesh nails as well as it should | |||
I don't know without looking at the log | |||
lizmat | but you're saying, that if we have a call on a private method in a role | 20:38 | |
it won't be fixable anyway ? | |||
because I know of cases like that as well | |||
jnthn | Like all role genericity, it's fixable but only via a dynamic optimization, not a static one | ||
That is, you can't do much about it in Perl6::Optimizer (which we do for the class case) | |||
But spesh is in a decent place to | |||
lizmat | right, but spesh currently doesn't | 20:39 | |
jnthn | I'm suspecting not, given the size of the performance difference. | ||
I mean, if everything were inlined it should just be a hash lookup | |||
lizmat | hmmm... profile is orange so it does get speshed apparently, just never jitted | ||
jnthn | Yeah, but that doesn't mean spesh is able to speed up that particular aspect. | ||
(The spesh2 todo list decidedly features making it able to explain what it did/didn't do at a more detailed level) | 20:40 | ||
lizmat | ack | 20:42 | |
brb | |||
back | 20:56 | ||
Geth | rakudo/nom: e7e97c7be3 | (Elizabeth Mattijsen)++ | src/core/Baggy.pm Streamline Baggy!SANITY - make the common case, as in all values are > 0, faster - rewrite using nqp ops - actually throw for negative values (the Failure was eaten before) - makes Hash.Bag about 15% faster | 21:05 | |
MasterDuke | asking when hopefully more people are around: are @a[], @a[*], and @a[0..*] supposed to produce identical output and cause identical side-effects? | 21:32 | |
lizmat | @a[] should return self | 21:34 | |
the others should return a list / seq | 21:37 | ||
jnthn | Yes, this. The zen slice is for forcing interpolation, primarily. | 21:40 | |
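The interpolation use of the zen slice that jnthn mentions, shown for the record; this is standard behaviour, not specific to the discussion:

```perl6
my @a = 1, 2, 3;
say "@a elems";     # no interpolation without a postcircumfix:  @a elems
say "@a[] elems";   # zen slice forces interpolation:  1 2 3 elems
```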
lizmat | jnthn: on that note, and in the hyper/race design | 21:41 | |
MasterDuke | ok. looks like most of the tests in roast are just `is`, not `is-deeply`, maybe explains why my change passed | ||
lizmat | shouldn't we just move all of the Any-iterable methods to Seq ? | ||
so we can mirror those for Seqqy ? | 21:42 | ||
the duality feels better that way, to me at least | |||
jnthn | I think a category theorist would disagree with hyper/race being dual... :) | 21:43 | |
I think they are all available on Any as part of the "any item acts as a list of one", though | 21:44 | ||
Really the default implementations would belong in Iterable | 21:45 | ||
And just coercers left behind in Any | |||
(Iterable because we want things like .unique to work on List as well as Seq) | 21:46 | ||
lizmat | jnthn: was more thinking Iterable vs Hyper/Race | 21:47 | |
since Seq's have positional bind failover, why not consider everything a Seq | 21:48 | ||
jnthn | Well, hyper/race are Iterable too | ||
Sequential iterating them or sinking them are the two ways you get the parallel part to dig into its work | 21:49 | ||
lizmat | it still feels like we're missing an opportunity for simplification somehow | ||
jnthn | fwiw, after some time to think on it I'm kinda leaning towards `for` (unless written `hyper for` or `race for`) always being sequential, rather than being infected by the thing it will iterate over | 21:50 | |
lizmat | m: say 42 ~~ Iterable | ||
camelia | False | ||
lizmat | m: say 42.iterator | ||
camelia | <anon|163756496>.new | ||
jnthn | I guess that's how we get the one-element case to just fall out of polymorphism :) | 21:51 | |
And given that, I'm not entirely sure whether moving the implementations of the sequential versions out of Any actually nets us a simplification | 21:52 | ||
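Concretely, the "any item acts as a list of one" behaviour under discussion; ordinary semantics shown here for context:

```perl6
for 42 { .say }        # a single item iterates as a one-element sequence: 42
say 42.map(* + 1);     # (43)
say 42 ~~ Iterable;    # False -- it can produce an iterator, but isn't typed as Iterable
```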
lizmat | yeah, but my point is really: if everything can generate an iterator, isn't everything Iterable ? | ||
jnthn | Only in a structurally typed language, which Perl 6 isn't :) | ||
lizmat | well, if we want to run anything like .grep or .unique on a List, we get an iterator for it | ||
jnthn | We can .Str everything but that doesn't make everything a Str | 21:53 | |
samcv | hi jnthn | ||
jnthn | Well, .Stringy woulda been a better example | ||
Because that's actually a role, and Str ain't, so it's not really the same | |||
o/ samcv | |||
The roles more indicate how something should be considered primarily | 21:54 | ||
Rather than what it might be turned in to | 21:55 | ||
But yes, I understand the discomfort from the sequential implementations living in Any while the parallel ones would go elsewhere | 21:56 | ||
lizmat | well, discomfort is a big word | ||
jnthn | I guess it boils down to, we're willing to consider anything a sequential iterable of one, but hyper/race are things you have to opt in to | 21:57 | |
lizmat | it feels to me like something's out of whack, but I can't really put my finger on it | ||
hmmmm | |||
I can foresee a situation in the future where we would have a "please rakudo, figure out yourself if it makes sense to do things in parallel" | 21:58 | ||
use less 'wallclock' | |||
use less 'cpu' | |||
so I don't see the opt in part as a premature optimization, really :-) | 21:59 | ||
premature, because the developer needs to decide | |||
and the developer may very well make the wrong decision | |||
I mean, if the result of a chain is a hash, why wouldn't it race automatically ? | 22:00 | ||
jnthn | Well, just because they say .hyper or .race doesn't mean we aren't allowed to decide to do it sequential if we believe that we're going to come out faster | ||
Unless they explicitly force our hand by being explicit about degree/batch | |||
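The explicit degree/batch knobs jnthn refers to look roughly like this; a sketch only, with the exact defaults and tuning behaviour left to the implementation:

```perl6
# explicitly request parallel mapping with a given degree and batch size
my @squares = (1..100_000).hyper(:degree(4), :batch(1024)).map(* ** 2);
say @squares.elems;    # 100000
```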
lizmat | but vice versa would never happen ? | ||
jnthn | Not with the Perl 6 we have today | 22:01 | |
In order to auto-parallelize you need to know it's safe | |||
lizmat | agree, but in the other 90 years to come? | |||
jnthn | In a hypothetical future Perl 6 where we implement, say, an effect system alongside the type system, or one of the various other ways of conveying safety, perhaps it would be possible | 22:02 | |
Or whatever other mechanisms are invented in the 90 years to come that feel suitably Perl-ish :) | 22:04 | ||
Whoever is smart enough to figure that out will presumably also be able to figure out how to re-organize iterable chains so they can be hyper-ized after-the-fact | 22:07 | ||
jnthn wanders off to relax for a bit before sleep :) | 22:11 | ||
MasterDuke | m: use Test; my @a = ^3; is-deeply @a[*].WHAT, List; is-deeply @a[].WHAT, Array | 22:16 | |
camelia | ok 1 - ok 2 - |
MasterDuke | should i add those ^^^ to roast? | ||
lizmat | eh, is-deeply on types ? | 22:21 | |
"is" is equipped to compare types, afaik | |||
m: use Test; is 42.WHAT, Int | 22:22 | ||
camelia | ok 1 - | ||
lizmat | m: use Test; is 42, Int | ||
camelia | not ok 1 - # Failed test at <tmp> line 1 # expected: (Int) # got: '42' |
lizmat | m: use Test; is "Int", Int | ||
camelia | not ok 1 - # Failed test at <tmp> line 1 # expected: (Int) # got: 'Int' |
lizmat | m: use Test; is "Str", Str | ||
camelia | not ok 1 - # Failed test at <tmp> line 1 # expected: (Str) # got: 'Str' |
lizmat | jnthn: good night | 22:25 | |
lizmat steps away from the keyboard as well | 22:26 | ||
Zoffix | m: use Test; isa-ok "meow", Str | 23:53 | |
camelia | ok 1 - The object is-a 'Str' | ||
Zoffix | m: use Test; does-ok 42, Numeric | 23:56 | |
camelia | ok 1 - The object does role 'Numeric' |