Geth | nqp: dffc2ddbde | (Samantha McVey)++ | tools/build/MOAR_REVISION Bump MoarVM to bring in fixes and tests passing again: fe1dc84a Merge pull request #556 from MasterDuke17/special_case_crlf_in_MVM_nfg_is_concat_stable; f2731339 Special case "\r\n" in MVM_nfg_is_concat_stable; 7d7a6b18 Fix bug in indexic_s if expanding cp is the last cp; c9b3a35a Add more details on speed improvement to changelog | 07:31 |
nqp: version bump brought these changes: github.com/MoarVM/MoarVM/compare/2...-gfe1dc84a | |
nqp: 713526e023 | (Samantha McVey)++ | docs/ops.markdown Fix typo in `indexic` op in ops.markdown | |
samcv | ok cool. test should be fixed now | 07:32 | |
travis-ci | NQP build passed. Samantha McVey 'Bump MoarVM to bring in fixes and tests passing again | 07:59 | |
travis-ci.org/perl6/nqp/builds/212630636 github.com/perl6/nqp/compare/a36cc...fc2ddbdebd | |||
NQP build passed. Samantha McVey 'Fix typo in `indexic` op in ops.markdown' | 08:02 | ||
travis-ci.org/perl6/nqp/builds/212630868 github.com/perl6/nqp/compare/dffc2...3526e0239a | |||
[Tux] | This is Rakudo version 2017.03-8-gbfbe4298a built on MoarVM version 2017.03-4-gfe1dc84a | 08:56 | |
csv-ip5xs 3.021 | |||
test 12.463 | |||
test-t 4.838 - 4.842 | |||
csv-parser 13.094 | |||
lizmat | Files=1179, Tests=55902, 193 wallclock secs (11.84 usr 4.68 sys + 1156.42 cusr 106.96 csys = 1279.90 CPU) | 10:50 | |
timotimo | huggable: speed | 11:36 | |
huggable | timotimo, tux.nl/Talks/CSV6/speed4.html | ||
timotimo | buggable: speed | ||
buggable | timotimo, [sparkline graph] data for 2017-02-24–2017-03-19; range: 4.788s–7.664s | ||
timotimo | 2017-02-19 10:22:49 test-t 999.999 | 11:37 | |
2017-02-19 10:23:44 test-t 999.999 | |||
that certainly muddies up the graph | |||
lizmat | those two days, test-t didn't compile | ||
*times | 11:38 | ||
perhaps NaN would have been better :-) | |||
[Tux] | :P | 11:39 | |
and it was just one day. You lot acted promptly :) | |||
timotimo | buggable: source | 11:41 | |
buggable | timotimo, See: github.com/zoffixznet/perl6-buggable | ||
timotimo | @recent .= grep: * ne '999.999'; # filter out bogus results | ||
so that's not what that spike in there was | ||
2017-03-14 09:48:29 test-t 5.016 | |||
2017-03-15 23:56:09 test-t 7.664 | 11:42 | ||
2017-03-16 10:09:09 test-t 5.037 | |||
that's the spike | |||
i wonder what happened there | |||
lizmat | I think that was samcv working on lc -> fc ? | 11:44 | |
timotimo | oh, maybe | ||
[Tux] | and my system-upgrade had no measurable influence on timings </pheeuw> | 11:45 | |
Geth | roast/recursive-dir: ddf9ae1c03 | (Wenzel P. P. Peppmeyer)++ | S32-io/dir.t add test for &dir(:recursive) | 12:52 |
rakudo: gfldex++ created pull request #1045: teach &dir to be recursive | 12:54 |
gfldex | who knew I had a commit bit to roast! | 12:55 | |
(that was planned to be a PR too) | |||
moritz | gfldex: you should have a commit bit to just about all repos in the perl6 github organization | 14:50 | |
Geth | rakudo/nom: fbe7ace6fc | (Zoffix Znet)++ | src/core/IO/Path.pm Make IO::Path.dir a multi So module-space can augment it with multies | 15:17 |
IOninja | ZOFVM: Files=1228, Tests=132877, 126 wallclock secs (21.25 usr 3.36 sys + 2413.49 cusr 283.17 csys = 2721.27 CPU) | 15:21 | |
gfldex | m: use MONKEY-TYPING; augment class IO::Path { multi method dir(:$recursive) { note 'oi‽' } }; dir(:recursive); | 16:00 | |
camelia | ( no output ) | ||
gfldex | m: use MONKEY-TYPING; augment class IO::Path { multi method dir(:$recursive) { note 'oi‽' } }; IO::Path.^compose; dir(:recursive); | ||
camelia | ( no output ) | ||
IOninja | the other candidate wins, 'cause it's earlier in source | 16:02 | |
m: use MONKEY-TYPING; augment class IO::Path { subset Foo of IO::Path where True; multi method dir(Foo: :$recursive) { note 'oi‽' } }; dir(:recursive); | ||
camelia | Potential difficulties: Smartmatch against True always matches; if you mean to test the topic for truthiness, use :so or *.so or ?* instead at <tmp>:1 ------> 3IO::Path { subset Foo of IO::Path where 7⏏5True; multi method dir(Foo: … |
IOninja | m: use MONKEY-TYPING; augment class IO::Path { subset Foo of IO::Path where {True}; multi method dir(Foo: :$recursive) { note 'oi‽' } }; dir(:recursive); | ||
camelia | ( no output ) | ||
IOninja | m: use MONKEY-TYPING; augment class IO::Path { subset Foo of IO::Path:D where {True}; multi method dir(Foo: :$recursive) { note 'oi‽' } }; dir(:recursive); | ||
camelia | ( no output ) | ||
IOninja | :/ | ||
m: use MONKEY-TYPING; augment class IO::Path { subset Foo of IO::Path:D where {True}; multi method dir(Foo: :$recursive!) { note 'oi‽' } }; dir(:recursive); | 16:03 | |
camelia | ( no output ) | ||
gfldex | m: use MONKEY-TYPING; IO::Path.^add_method('dir', my method (:$recursive) { note 'oi‽' }); IO::Path.^compose; dir(:recursive); | ||
camelia | ( no output ) | ||
IOninja | m: use MONKEY-TYPING; augment class IO::Path { subset Foo of IO::Path:D where {True}; multi method dir(Foo: :$recursive!) { note 'oi‽' } }; ".".IO.dir(:recursive); | ||
camelia | ( no output ) | ||
IOninja | weird | ||
gfldex | indeed | ||
IOninja | Ahhhhh | 16:04 | |
gfldex: the bot's restricted. It's not the real IO::Path class | |||
gfldex | it doesn't work on my host either | ||
m: say dir; | 16:05 | ||
camelia | (".cpanm".IO ".local".IO ".npm".IO ".perl6".IO ".perlbrew".IO ".rcc".IO ".ssh".IO "Perlito".IO "evalbot".IO "log".IO "nqp-js".IO "p1".IO "p2".IO "perl5".IO "std".IO ".bash_history".IO ".bashrc".IO "mbox".IO ".lesshst".IO "evalbot.log".IO ".cpan".IO "daleā¦ | ||
ugexe | multi method dir(Foo: :$recursive! where *.so) { note 'oiā½' } }; # don't you mean to *.so the check on :recursive? | ||
(not related to this problem obviously) | |||
gfldex | jnthn: we need a little dispatcher judgement here | 16:06 | |
IOninja | gfldex: the first version? Those won't work 'cause the core candidate wins. Try the ones with the subset | 16:07 | |
That dispatcher thing was already judged. | |||
It's not a bug. | |||
gfldex | so my candidate is just late in the list then? | ||
IOninja | mhm | ||
And you can make it be considered first by making the invocant typecheck more specific | |||
(or add positional args) | 16:08 | ||
gfldex | is it the implicit *%_ that is at play here? | ||
IOninja | I think it's that named args don't play a role in dispatch other than "have any" vs "none" | 16:09 | |
gfldex | that makes overloading quite tricky | 16:10 | |
IOninja | I don't think we want to encourage rampant `augment` use :) | ||
gfldex | well, overloading would introduce a new type object to match on | ||
I can still .wrap | |||
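A minimal sketch of the .wrap route mentioned above, using a made-up stand-in class rather than the real IO::Path; the "recursive" behaviour here is illustrative only:

    # Stand-in for the method being extended (the real target in the discussion is IO::Path.dir).
    class Repo {
        method dir() { <a b c> }
    }

    # Wrap the existing method instead of adding a competing multi candidate:
    Repo.^find_method('dir').wrap(
        my method (:$recursive) {
            my @entries = callsame;                    # run the original method with the same args
            $recursive ?? @entries».uc !! @entries     # illustrative "recursive" twist
        }
    );

    say Repo.new.dir;               # a b c
    say Repo.new.dir(:recursive);   # A B C

The same .^find_method(...).wrap(...) works on a core class without MONKEY-TYPING, at the cost of intercepting every call to that method rather than adding a new candidate.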
ugexe | I wonder what the performance penalty is for augmenting something with heavy use like IO::Path | 16:13 | |
gfldex | the dispatcher cache should take care of that | 16:14 | |
ugexe | Ah, how does augmenting in general impact performance? | 16:15 | |
(not how as in how much, but as in what effects cause any slowdowns) | |||
gfldex | as I understand it it's just .add_method and .compose. Whereby the latter marks the dispatcher cache as dirty. | 16:20 | |
and since it happens at compile time, that should not matter much anyways | 16:21 | ||
ugexe | I wondered if precomp would get disabled or something like that | ||
gfldex | i can't see why it should. Method dispatch is compile time anyways. | 16:22 | |
you don't need MONKEY-* to use the MOP | |||
ugexe | At the very least wouldn't it need to re-precompile things? Like if Foo::Bar has `use XXX`, and later on in the dependency graph `XXX` is augmented | 16:25 | |
I guess if it only affects dispatching then no... but for some reason I thought it affected more than that | 16:27 | ||
gfldex | at least for now augment only allows to add methods | ||
lizmat | m: say Seq ~~ Positional # shouldn't this be True ? | 21:42 | |
camelia | False | ||
lizmat | m: dd Seq.new(^10 .iterator)[5] | 21:43 | |
camelia | 5 | ||
lizmat | .tell jnthn Shouldn't PositionalBindFailover does Positional ? | 21:44 | |
yoleaux2 | lizmat: I'll pass your message to jnthn. | ||
IOninja | No, because it doesn't support [] willy-nilly. | 21:47 | |
From spec: "The Iterable role implies the ability to support sequential access through iterators. There is also limited support for en-passant use of postcircumfix:<[ ]>, but since some Iterables may only be iterated once, not all Iterables are considered Positiona" | |||
l | |||
And IIRC PositionalBindFailover makes a List out of a Seq | |||
m: sub (@foo) { say WHAT @foo }(1...*) | 21:48 | ||
camelia | (List) | ||
lizmat | but only internally | ||
m: my @a = ^10; say @a.rotate ~~ Positional # should this be True ? | 21:49 | ||
camelia | True | ||
lizmat | if @a.rotate returns a Seq, it becomes false | ||
IOninja | m: my $s = (1...*); say $s[0]; say $s[5]; say $s[2]; say $s[0]; say $s[5]; say $s[2]; | ||
camelia | 1 6 3 1 6 3 |
IOninja | Oh, I guess it does support willy-nilliness.... | 21:50 | |
lizmat | yes, I think that was one of the outcomes of the GLR | ||
that Seq wraps iterators and thus supports positional willy-nilliness | ||
IOninja | Ah | 21:51 | |
lizmat | that's why Seq has its own AT-POS | ||
IOninja | s: (1...*), 'AT-POS', \(5) | ||
SourceBaby | IOninja, Sauce is at github.com/rakudo/rakudo/blob/fbe7...eq.pm#L193 | ||
IOninja | Ah. I see. | 21:52 | |
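Spelled out as a local example (the same behaviour the evals above show; Seq.AT-POS caches values as [ ] reaches into the sequence):

    my $s = (1..5).map(* ** 2);          # a Seq
    say $s[2];                           # 9 -- en-passant [ ] works; pulled values get cached
    say $s[0];                           # 1 -- earlier values are still there, from the cache
    say $s ~~ Positional;                # False
    say $s ~~ PositionalBindFailover;    # True -- the binder's fallback hook (discussed below)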
lizmat | the reason I'm asking, is that having List.rotate return a Seq, gives some spectest fallout, one of them is caused by Seq not being Positional | 21:54 | |
I was also thinking that Seq.rotate could work for rotation values >= 0 | 22:01 | ||
== 0 would just be self | 22:02 | ||
>0 would basically be a caching skip of N that would be "played" after the original iterator expires | ||
rotation values <0 would still need to reify everything first | |||
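A rough gather/take sketch of that caching-skip idea for non-negative rotations; the name lazy-rotate and the shape are illustrative, not a proposed core implementation:

    # Rotate a one-shot iterable left by $n >= 0 without reifying it all up front:
    # stash the first $n values, stream the rest, then replay the stash.
    sub lazy-rotate(Iterable \source, Int $n where * >= 0) {
        gather {
            my @stash;
            my $seen = 0;
            for source {
                if $seen++ < $n { @stash.push($_) }   # cache the skipped head
                else            { take $_ }           # emit the tail immediately
            }
            take $_ for @stash;                       # replay the cache at the end
        }
    }

    say lazy-rotate(1..10, 3);   # (4 5 6 7 8 9 10 1 2 3)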
Geth | roast: b68190d533 | usev6++ | S29-conversions/ord_and_chr.t Use better ticket for fudge message | 22:07 |
jnthn | No, Seq should not do Positional | 22:14 | |
yoleaux2 | 21:44Z <lizmat> jnthn: Shouldn't PositionalBindFailover does Positional ? | ||
jnthn | The point of the PositionalBindFailover role is so when we fail to bind a Seq to an argument that wants a Positional, we can resolve it by grabbing the cached thing | ||
lizmat | jnthn: should @a.rotate ~~ Positional be True ? | ||
jnthn | Hm, I forget the return value of rotate, but it's in-place, ain't it? | 22:15 | |
lizmat | rotate atm returns a List / Array | ||
I was trying to make it return a Seq | |||
jnthn | Oh, no, it ain't | ||
If it's going to return a Seq then it ain't going to be Positional | 22:16 | ||
lizmat | right, and a spectest then fails | ||
so is the test wrong ? | |||
jnthn | One that explicitly checks for that? | ||
lizmat | yup | ||
jnthn | Well, it depends if we think Seq is a better choice of return type | ||
lizmat | @a.rotate ~~ Positional | ||
for rotate values < 0, it doesn't make sense as it needs to reify | 22:17 | |
jnthn | It'd be a bit weird to have .reverse return a Seq and .rotate not, I guess | ||
I'd thought .rotate was in-place until I just checked the docs now :) | |||
lizmat | but for rotate values > 0, that would basically be a caching .skip(N) and replay the cache at the end | ||
jnthn | *nod* | 22:18 | |
lizmat | hence my attempt at making it a Seq | ||
and getting that type of spectest fails :-( | 22:19 | ||
jnthn | If that's the only spectest fail, and given rotate is probably little-used, we can likely get away with considering it errata | 22:20 | |
I don't really fancy having to explain why .reverse returns Seq and .rotate does not :) | |||
Feels like the odd one out in that sense | |||
lizmat | well, until recently .reverse didn't return a Seq either :-) | 22:21 | |
jnthn | Aye | ||
lizmat | and thunk xx 42 is still problematic :-( | ||
jnthn | xx should certainly be returning a Seq | ||
IOninja | What makes a Positional Positional? Seq supports .AT-POS and stuff | 22:22 | |
jnthn | I think I'd be fine with rotate returning Seq, given we now have reverse that way | ||
Geth | rakudo/nom: 6060bd38ce | (Elizabeth Mattijsen)++ | src/core/Any-iterable-methods.pm Streamline Any.unique - don't return when doing pull-one - don't create native str representation - add sink-all - makes it a few percent faster |
jnthn | IOninja: It's the role that goes with the @ sigil | ||
MasterDuke | fwiw, .rotate is used ~25 times in the ecosystem | ||
jnthn | So, things that you can consider as array-ish | 22:23 | |
And so stateful | 22:24 | ||
Seq is not like this, because it's one-shot | |||
IOninja | OK. | ||
jnthn | Introducing something that was a one-shot Iterable was at the heart of the GLR's eliminating lots of memory retention surprises | 22:25 | |
(Pre-GLR, I - and I suspect most others - found that hard to reason about.) | ||
lizmat | jnthn: do you consider the PositionalBindFailover a temporary or a final solution ? | ||
fwiw, it feels like a temporary solution to me | 22:26 | ||
jnthn | No | ||
Heh | |||
lizmat | one that may have an infinite lifetime | ||
but still :-) | |||
jnthn | Well, the alternative was that we hard-coded a list of types into the binder :) | 22:27 | |
And when somebody came along and asked "how do I make a type that behaves like Seq when passed to an @foo" the answer would be "tough luck" :) | 22:28 | |
Sticking a role there means we can see "oh, you just..." | |||
lizmat | ok, I can see that | ||
but should Seq be doing that role in the end ? | |||
jnthn | Sure | ||
HyperSeq also does it, fwiw | 22:29 | ||
Aside from it being open to other types joining the scheme, it's also a bit faster to check for that role than to check for Seq and HyperSeq | 22:30 | ||
lizmat | ack | ||
jnthn | The cases when you'd actually be doing somewhere where you need to implement the role are pretty few and far between :) | 22:31 | |
You'd pretty much have to be implementing your own list-processing paradigm or something | |||
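The "oh, you just..." answer, sketched for a hypothetical user type; this assumes (as for Seq and HyperSeq) that supplying .iterator is all the role needs in order to build its cached List:

    # A one-shot, Seq-like user type that can still be bound to an @-sigilled
    # parameter: the binder falls back to PositionalBindFailover's .cache.
    class Countdown does PositionalBindFailover {
        has Int $.from is required;
        method iterator() { ($!from ... 1).iterator }
    }

    sub takes-positional(@values) { @values.join(', ') }

    say takes-positional(Countdown.new(from => 5));   # 5, 4, 3, 2, 1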
jnthn is still pondering whether we'll want a RaceSeq to go with HyperSeq or whether that doesn't want to be type-encoded... | 22:32 | ||
On the discussion about augment earlier - I suspect use of augment probably needs to imply "no precompilation" | 22:34 | ||
lizmat | jnthn: wouldn't a RaceIterator not be simply a protected pull-one ? | ||
*consist of | |||
jnthn | No | ||
Race is just a form of hyper that doesn't care about order | 22:35 | |
But it still needs to be involved in the batching and, if we care for it to perform decently, pipelining stuff. | |||
(That is, .race.grep(...).map(...) should batch things up and run the map/grep on those things together, not do a fork/join between every stage) | 22:36 | ||
The start of a .race just pulls input values using the normal iterator API | 22:37 | ||
lizmat | jnthn: in the case of race, I was more thinking along a RaceIterator with a protected pull-one, being run from N threads at a time | ||
with the result being put into a ConcQueue, and a resulting iterator eating off of that | 22:38 | ||
wouldn't that allow for minimum overhead ? | |||
or maybe this is an altogether different paradigm | 22:39 | ||
jnthn | Sounds perhaps different again | ||
I'm not sure where the batching goes in that model | |||
lizmat | there would be no batching | ||
jnthn | I think the model you're suggesting is much more like what the ==> operator is meant to support | 22:40 | |
Where it sets up producer/consumer between the stages | |||
And there it would make sense to stick concurrent queues between | 22:41 | ||
That's an OK model when the operations are relatively costly | |||
lizmat | so a racy version of ==> :-) | 22:42 | |
jnthn | But in .grep(*.some-simple-property) then the cost of communication would dwarf the test | ||
Thus the batching under the hyper/race paradigm | 22:43 | ||
Hm, yeah, you're right that ==> does care about order | 22:44 | ||
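Roughly the model lizmat describes, sketched with a Channel as the concurrent queue and start blocks as the N workers; every name here (naive-race, &op, :$workers) is made up for illustration:

    sub naive-race(Iterable \source, &op, Int :$workers = 4) {
        my $iter = source.iterator;
        my $lock = Lock.new;
        my $out  = Channel.new;

        my @jobs = (^$workers).map: {
            start {
                loop {
                    my \item = $lock.protect({ $iter.pull-one });   # the "protected pull-one"
                    last if item =:= IterationEnd;
                    $out.send(op(item));    # one queue send per value
                }
            }
        };
        Promise.allof(@jobs).then({ $out.close });
        $out.list                           # order is whatever the workers produced
    }

    say naive-race(1..20, * ** 2).sort.head(5);   # (1 4 9 16 25)

With a cheap &op, that per-value .send is exactly the communication cost jnthn points at above, which is what batching is meant to amortise.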
lizmat | jnthn: can't put my finger on it, but it feels over-engineered and a bit klunky still :-) | ||
jnthn | What, the batching? | ||
lizmat | yeah | ||
I mean, you're batching values so you can use the single-threaded version of a method on it, and then collect the result again, right ? | 22:46 | ||
and you need to offset the batch size with the overhead of setting up / calling the single threaded version of the method | |||
but you don't know the setup / calling overhead of every method | 22:48 | ||
so your batching size optimum will heavily depend on what you will be doing | |||
jnthn | That's in no small part why we optionally allow the user to pick it, though with time we can do reasonable heuristics on it | 22:49 | |
Also even *then* there's no single right answer | |||
Because it's a latency/throughput knob | |||
lizmat | indeed | ||
and that's why I'm feeling uneasy with it | |||
feels to me we're trying to row against the stream, rather than go with the flow | 22:50 | ||
so I was thinking, maybe we need some HyperIterator role that would have a 'process-one' method | 22:51 | ||
given a value to process, and return what it is supposed to | |||
in the case of a failing grep, an empty slip | 22:52 | ||
in the case of a map, some values, possibly a Slip | 22:53 | ||
jnthn | Guess I can only say in my experience dealing with these things, being able to trade off latency and throughput was the difference between a decent user experience and an unacceptable wait to first result (in the latency direction) or parallelism being able to win something over sequential for various cases (in the throughput direction) | 22:55 | |
If I hadn't been able to pick my preference, I'd just not have been able to apply it in one use-case or the other | ||
lizmat | but isn't that about limiting the resource usage of a race/hyper more than anything else ? | 22:57 | |
and you picking latency / throughput just another way of limiting resource usage ? | |||
aka, is that the right knob :-) | |||
wouldn't you need to be able to say: use 50% of CPU maximally, and only 10% of disk IO ? | 22:58 | ||
or something like that ? | |||
jnthn | No, it wasn't really resource usage at all that was the issue. It was more that in one case, the computation was producing results that a user was waiting to see, and some data earlier mattered more than all data faster. In the other case the goal was just to chug through the operation ASAP. | 23:01 | |
The batching (also named splitting or partitioning elsewhere) thing is not something I've come up with, fwiw. It's also what things like PLINQ and Java 8 parallel streams, which are in the very same problem space, have to offer. | 23:02 | ||
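For reference, the knob as it surfaces on .hyper/.race: batch size and worker count are caller-tunable named arguments (the numbers below are arbitrary):

    my @squares = (1..1000).hyper(:batch(64), :degree(4)).map(* ** 2);
    say @squares.elems;      # 1000; .hyper preserves input order

    my @sevens = (1..1000).race(:batch(64), :degree(4)).grep(* %% 7);
    say @sevens.elems;       # 142; .race trades order away for throughput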
lizmat | good to know1 | 23:03 | |
! | |||
grr... I'm tired, so I should probably stop discussing parallel things :-) | ||
jnthn | Prolly Clojure also, but been a good while since I looked at that :) | ||
Though they seemed to have quite a nice story in this area also | 23:04 | ||
Yeah, I've had a cold this weekend | |||
Sore throat, sore ears, headache. Not much fun. :/ | |||
So should probably go rest that off too | |||
Didn't notice it was already midnight... | |||
lizmat | yeah, time flies | 23:06 | |
jnthn | Doubly so at a weekend, it seems... | 23:07 | |
Anyways, going for another go at sleeping off the coldy thingy. :) | 23:08 | ||
'night | |||
lizmat | nighty night! | 23:10 |