Geth | rakudo/nom: ab3162c127 | (Zoffix Znet)++ | src/core/Numeric.pm Fix infix:<before>/infix:<after> for Complex Currently before/after uses `cmp` that's pretty liberal with allowed types, which makes comparisons like `i after 42` "work". Per TimToady++[^1], these should use infix:«<=>», which this patch makes it do. However, one side effect of this is that <42+42i> after <41+42i> used to work, ... (5 more lines) |
02:21 | |
roast: af93bfd4c0 | (Zoffix Znet)++ | S03-operators/relational.t Test `before`/`after` with Real/Complex Accompanies commit github.com/rakudo/rakudo/commit/ab3162c127 |
02:22 | ||
rakudo/nom: 6c92994729 | (Zoffix Znet)++ | src/core/Numeric.pm Revert "Fix infix:<before>/infix:<after> for Complex" This reverts commit ab3162c1275f2519a2b349ac9c3528697e6c730f. I misunderstood. |
02:36 | ||
roast: bd0e07bac7 | (Zoffix Znet)++ | S03-operators/relational.t Revert "Test `before`/`after` with Real/Complex" This reverts commit af93bfd4c0ccd497451fe8c3b5e4a7c928bd1eec. I misunderstood. |
02:37 | ||
[Tux] | This is Rakudo version 2017.01-138-g6c9299472 built on MoarVM version 2017.01-25-g70d4bd53 | 07:27 | |
csv-ip5xs 2.847 | |||
test 12.180 | |||
test-t 4.889 - 2nd 5.039 | |||
csv-parser 13.258 | |||
lizmat | Files=1175, Tests=55869, 187 wallclock secs (11.25 usr 4.68 sys + 1102.90 cusr 114.08 csys = 1232.91 CPU) | 07:44 | |
Geth | rakudo/nom: f2b97b0ec3 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Iterator.pm Add R:It.ReifiedList.skip-at-least For faster skipping along a reified list. |
10:11 | |
rakudo/nom: 18e6f0d6d5 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Iterator.pm Add R:It.Empty.skip-one/skip-at-least For even faster skipping along the empty iterator |
|||
lizmat | afk& | 10:16 | |
Geth | rakudo/nom: 87f61d9694 | (Elizabeth Mattijsen)++ | src/core/Rakudo/Iterator.pm Introducing R:It.ReifiedArray Basically a streamlined version of the Array.iterator for the fully reified case. Added specific support for skip-one and friends, like we have in R:It.ReifiedList |
12:51 | |
rakudo/nom: c069f4598b | (Elizabeth Mattijsen)++ | src/core/Array.pm Use R:It.ReifiedArray in Array.iterator This could have some beneficial effects, specifically in the if @a { } and +@a cases. |
|||
brokenchicken | What's the price paid for additional multi candidates? | 13:11 | |
Like if I choose to add another multi instead of adding a conditional in an existing multi, what negative effect is there? | 13:12 | |
lizmat_ | if it allows compile-time candidate selection, there's almost always a plus | 13:14 | |
lizmat | or faster run-time selection | ||
or caching | |||
brokenchicken | But does it end up using more RAM or anything like that? | ||
lizmat | well, it would be another entry in the dispatch table :-) | 13:15 | |
so yes, more ram | |||
does it offset the ram used for the condition? no idea | |||
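(To make the trade-off concrete, a minimal sketch; the `transform` names are made up for illustration. Two multi candidates let the dispatcher pick a body, possibly at compile time, at the cost of an extra dispatch-table entry; the alternative is one sub with a run-time conditional.)

    multi transform(Int $n) { $n * 2 }      # extra candidate: one more dispatch-table entry
    multi transform(Str $s) { $s.uc }
    sub transform-cond($x)  { $x ~~ Int ?? $x * 2 !! $x.uc }   # conditional instead

    say transform(21);        # 42
    say transform-cond('hi'); # HI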
brokenchicken | Also, I often notice code that avoids using a semicolon at the end of a statement. Is that on purpose because there's some benefit, or is it simply because in those cases the semicolon is not required? | 13:16 |
lizmat | to me it indicates that it's a return value | 13:17 | |
but that's just me | |||
there is no functional difference | |||
brokenchicken | OK. Thanks. | ||
jnthn | I tend to omit the ; on the last line of a block/routine when it's a return value | 13:35 |
Purely as a matter of convention | |||
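(A tiny example of that convention; the sub name is illustrative. With or without the trailing `;` the behaviour is identical, the omission just signals that the last expression is the value being returned.)

    sub double($n) {
        $n * 2        # no trailing ; : this value is what the sub returns
    }
    say double(21);   # 42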
Geth | rakudo/nom: 833fe43da6 | (Elizabeth Mattijsen)++ | 2 files Give List/Array their own dedicated .tail I looked at it after seeing it being recommended on StackOverflow. Turned out the generic Iterable.tail based solution had disadvantages for reified Lists/Arrays. This is now no longer the case. For reified List/Arrays, .tail is about 8x as fast as [*-1], and .tail(N) is about 16x as fast as [*-N .. *-1] (on a 10 element list). |
13:37 | |
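(The commit above compares `.tail` with subscripting from the end; roughly as below, where the speed-ups are the commit's own measurements on a 10-element list, not re-benchmarked here.)

    my @a = 1..10;
    say @a.tail;           # 10        reported ~8x faster than @a[*-1] once reified
    say @a.tail(3);        # (8 9 10)  reported ~16x faster than @a[*-3 .. *-1]
    say @a[*-1];           # 10
    say @a[*-3 .. *-1];    # (8 9 10)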
[Coke] | jnthn: I mean, it's gotta be slightly faster to compile, no? :) | 13:44 | |
jnthn | Well, it's one less char to parse, so technically yes :P | 13:46 | |
[Coke] | WOOHOO | 13:55 | |
masak .oO( purely a matter of convection ) | 13:57 | ||
[Coke] | ... so we have a few degrees of freedom there? (</stretch>) | 13:58 | |
masak | :P | ||
brokenchicken | cpan@perlbuild2~/CPANPRC/rakudo (nom)$ ./perl6 -e 'say i ~~ (-500000000000000000..2000000000000000)' | 14:05 | |
False | |||
cpan@perlbuild2~/CPANPRC/rakudo (nom)$ ./perl6 -e 'say i ~~ (-500000000000000000..200000000000000000000)' | |||
True | |||
Does this look weird to anyone? | |||
masak | ...a little? | 14:06 | |
brokenchicken | heh | ||
masak: so you'd expect it to give False in both cases? | |||
masak | aye | 14:07 | |
brokenchicken | OK | ||
masak | I'd expect that | ||
jnthn | wtf :) | 14:09 | |
brokenchicken | jnthn: consequence of <=> scaling tolerance for ignoring imaginary part based on both args | ||
arnsholt | Well, that falls out from the "imaginary parts negligible" part of Complex compare brokenchicken highlighted earlier today, no? | 14:10 |
Yeah, that | |||
brokenchicken | m: say i <=> 2000000000000000 | ||
camelia | rakudo-moar 833fe4: OUTPUT«Can not convert Complex to Real: Complex is not numerically orderable␤  in block <unit> at <tmp> line 1␤␤Actually thrown at:␤  in block <unit> at <tmp> line 1␤» | |
brokenchicken | m: say i <=> 200000000000000000000 | ||
camelia | rakudo-moar 833fe4: OUTPUT«Less» | ||
arnsholt | That's super-weird, IMO | 14:11 | |
brokenchicken | yeah | ||
arnsholt | It should just throw in all cases, I think | ||
masak | <=> yes; ~~ no | ||
arnsholt | I guess ~~ is different, yeah | 14:12 | |
brokenchicken | Complex.Real and by extension >/</<=/>= do the same type of thing, but use the default $*TOLERANCE | ||
arnsholt | But if Range ~~ uses <=> internally, it'll throw =) | ||
Or perhaps not, given that it doesn't explode in the example above | 14:13 | ||
brokenchicken | It's a failure. And I took care of it in my example :) | ||
arnsholt | Heh | ||
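(As a rough sketch of the behaviour being described, not Rakudo's actual source, and with the exact tolerance formula guessed: the Complex-to-Real step only succeeds when the imaginary part is negligible relative to the magnitudes involved, which is why `i` is orderable against a huge Real but not a smaller one.)

    sub complex-to-real-ish(Complex $c, Real $r) {
        fail "Can not convert Complex to Real"
            unless $c.im.abs < ($c.re.abs max $r.abs) * $*TOLERANCE;
        $c.re
    }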
brokenchicken | cpan@perlbuild2~/CPANPRC/rakudo (nom)$ ./perl6 -e 'for ([<0+0i>, -1..10],) { dd "{.[0].perl} ~~ {.[1].perl}"; "{.[0].perl} ~~ {.[1].perl}".EVAL.say; say .[0] ~~ .[1] }' | 14:45 | |
"<0+0i> ~~ -1..10" | |||
True | |||
False | |||
wtf? | |||
jdv79 | timotimo: any luck on grinding that conc related bug | 14:46 | |
? | |||
brokenchicken | something doesn't decontainerise... | ||
DrForr | My suspicion would be that 0+0i is two values, 0 and 0i. | 14:47 | |
brokenchicken | DrForr: no, it's a complex literal. But it gives different result depending on whether I feed it as is or via an array | 14:48 | |
m: dd <0+0i> | |||
camelia | rakudo-moar 833fe4: OUTPUT«<0+0i>» | ||
brokenchicken | m: dd WHAT <0+0i> | ||
camelia | rakudo-moar 833fe4: OUTPUT«Complex» | ||
DrForr | Okay, I thought I'd seen instances where it was a number created from Re and Im parts, but I may have spent too much time in the grammar. | 14:49 |
brokenchicken | hm, sticking deconts as nqp::decont(.[0]) ~~ nqp::decont(.[1]) doesn't fix it :S | ||
Ah | 14:50 | ||
It's aliasing stuff. The .[1] ain't what I meant it to be | 14:51 | ||
ZOFVM: Files=1224, Tests=132845, 178 wallclock secs (22.34 usr 3.16 sys + 3453.93 cusr 258.35 csys = 3737.78 CPU) | 15:09 | ||
Geth | rakudo/nom: f2894d311c | (Zoffix Znet)++ | src/core/Range.pm Fix smartmatch of Complex against Ranges For non-Int Ranges, ACCEPTS uses `before`/`after` that utilizes `cmp` semantics and so we end up with weirdness such as i ~~ 0..1 giving True. TimToady++ suggested[^1] to use `<=>` instead, however, its scaling tolerance for ignoring imaginary part feels a bit too weird in Ranges[^2], ... (6 more lines) |
15:34 | |
roast: 490dbe4634 | (Zoffix Znet)++ | S02-types/range.t Test Complex smartmatch against Range Rakudo fix: github.com/rakudo/rakudo/commit/f2894d311c |
15:41 | ||
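(With f2894d311c applied, the smartmatches shown earlier are expected to come out False, since `i` has a non-zero imaginary part; the exact edge-case behaviour is whatever the new roast tests pin down.)

    say i ~~ 0..1;                                        # False (was True before the fix)
    say i ~~ -500000000000000000..200000000000000000000;  # presumably False now as well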
[Coke] | our pod to html might be borked. Seeing lots of constructs uselessly wrapped in <p> tags. | 15:51 | |
validator.w3.org/nu/?doc=https%3A%...rl6.org%2F | |||
... moving to #perl6, whoops | 15:52 | ||
timotimo | jdv79: sorry, no luck so far :( | 16:21 | |
jdv79: though could you see if turning off spesh will make your ungolfed code work or crash? | 17:04 | ||
jdv79 | MVM_SPESH_DISABLE? | 17:13 | |
timotimo | yup | 17:14 | |
if that helps, we can try the spesh bisect tool to figure out what exact thing b0rks it | |||
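(For anyone following along: disabling the specializer for a single run is just an environment variable; the script name below is a placeholder.)

    MVM_SPESH_DISABLE=1 ./perl6 your-script.p6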
jdv79 | first try seems to validate your view | 17:59 | |
but im busy atm to try again | 18:00 | ||
*too | |||
TimToady | here's the current statistics on dynvar overhead: gist.github.com/anonymous/495fa6ad...6e5ed53a1e | 18:09 | |
timotimo | thanks jdv79 | ||
my internet connection is currently ... the absolute crap | |||
TimToady | tl;dr is that we're a bit under 4% overhead, but the current cache mechanism is obviously overstressed still, with an average of 13 frames to find a cached copy of $*W, for instance | ||
timotimo | oof. | 18:10 | |
$*W is pretty expensive, then | |||
that's Quite Bad, then | |||
since it's needed all the time | 18:11 | ||
TimToady | and every time we look up @*comp_line_directives, it's not there, and we average 155 frames to figure that out | ||
timotimo | i don't even know what comp_line_directives is | ||
is that #?line blah 0? | |||
TimToady | beats me, it's only in nqp | ||
timotimo | could very well be it | ||
not quite sure why it'd be dynamic in scope, if it is what i think it is | |||
TimToady | dynvars have definitely been a bit of an attractive nuisance over the years | 18:12 | |
timotimo | is 55787 the number of lookups in total? | 18:13 | |
and all of them hit N, i.e. "not there"? | 18:14 | ||
brokenchicken | $*HAS_YOU_ARE_HERE hah | 18:15 | |
TimToady | F = found in frame, C = found in cache (earlier frame), N = not found, I = found in an inlined block | ||
timotimo | i might look into getting rid of the comp line directives dynvar | 18:16 | |
TimToady | well, this really tells us two different things | ||
first, we overuse dynvars | |||
but second, the general dynvar overhead is too much | |||
timotimo | it's surprisingly high up in the list, and i'm pretty sure it's only used in the core setting | ||
what were you running to get these stats? | |||
TimToady | MVM_DYNVAR_LOG=/tmp/dlog on the parser step | 18:17 | |
timotimo | well, the parser step of what? :) | ||
TimToady | (which does little IO, or $*STDOUT would be way up there) | ||
the setting | 18:18 | ||
timotimo | oh, huh? | ||
but the core setting uses line directives! | |||
TimToady | and then I have a program that analyzes the log | ||
timotimo | what the ... | ||
okay now i use mosh to connect to my irc client | 18:19 | ||
that should make things more bearable | |||
brokenchicken | So that 4% only affects setting compilation and nothing in user code? | 18:20 | |
m: say 72*.04 | 18:21 | ||
camelia | rakudo-moar f2894d: OUTPUT«2.88» | ||
brokenchicken | Oh wait no, it affects user code too, duh :P | 18:22 | |
timotimo | all user code that uses dynamic variables too much would suffer worse performance until the dynvar caching stuff is bettered | ||
TimToady | I should maybe try this on something that is heavy in $*STDOUT or $*STDERR | 18:23 | |
timotimo | .o( rc-forest-fire ) | ||
TimToady | hah | ||
well, if a program is only using one dynvar, the cache will work pretty okay, I expect | 18:24 | ||
timotimo | ought to, yeah | ||
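(To make the overhead concrete, a tiny illustration of what a dynamic variable lookup does; the variable and sub names are invented. The binding lives in some caller's frame, so every `$*...` reference means walking up the call chain unless a cache entry short-circuits the search.)

    my sub log-it($msg) { $*LOG-TO.say($msg) }               # resolved dynamically at run time
    my sub middle()     { log-it('deep in the call chain') }
    my sub run-job()    { my $*LOG-TO = $*ERR; middle() }
    run-job();   # the $*LOG-TO lookup searches caller frames until it reaches run-job's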
jnthn | TimToady: fwiw, I think $*W could also go on the cursor | 18:32 | |
timotimo | i wonder how hard it'd be to run the same measurements but pretend that one given dynvar wasn't a dynvar | 18:35 | |
perhaps an env var could be introduced that prevents a single name for a dynvar from reaching the cache at all? |||
then we could compare how other dynvars would move around in that case | |||
without taking the necessary steps to install it in a proper place | 18:36 | ||
and so we could automatically do it for every dynvar that exists and pick out the one that's worth the most | |||
just a random thought | 18:37 | ||
TimToady | jnthn: yes, I'm gonna put a lot of those into a Braid object that floats along where we currently just have $!actions | ||
in fact, actions can go in that object too | 18:38 | ||
then there's no more overhead to copying the braid pointer than we have currently copying the actions pointer | |||
and things like pragmas and other stuff can go in there too | |||
timotimo | and "are we in a core setting?" | 18:39 | |
TimToady | that too | ||
btw, on 1000 generations of forest fire, $*OUT never caches, but is N with 20 frames per lookup average, so we can do better there | 18:40 | ||
timotimo | never caches o_O | ||
maybe it tends to cache in frames that are going to be thrown out immediately anyway? or "never gets installed at all"? | |||
jnthn | It doesn't cache because it's not on the call stack, but in PROCESS::<$OUT> | ||
timotimo | oh, huh | 18:41 | |
TimToady | well, it's looking 20 frames to figure that out every time | ||
timotimo | that information doesn't land in the cache, so that'd be a good step forward perhaps? | ||
jnthn | Well, unless the cache has a way of saying "we don't have it on the stack" | ||
TimToady | and that's a pretty shallow program | ||
jnthn | Which would be reasonable. | ||
TimToady | yes, we need to cache negatives too | 18:42 | |
whatever the scheme | |||
jnthn | Trouble with caching results from PROCESS is that it might be rebound | ||
TimToady | not supposed to be | ||
jnthn | Well, if we're willing to forbid that... :) | 18:43 | |
TimToady | well, a good cache would just have a link straight to PROCESS::<$OUT> | ||
jnthn | How general is it, though? In Test::Scheduler I relied on being able to rebind PROCESS::<$SCHEDULER> for example | ||
TimToady | m: PROCESS::<$OUT> = $*ERR | 18:45 | |
camelia | ( no output ) | ||
TimToady | we could maybe restrict to assignment, so we always have the same container | ||
jnthn | That could work | ||
TimToady | then the cache can point to the container | ||
jnthn | *nod* | ||
Yeah | |||
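(The distinction being settled on, spelled out; this is illustrative of the proposal, not something Rakudo currently enforces, and `$some-new-handle` is a placeholder.)

    PROCESS::<$OUT> = $*ERR;                 # assignment: same container, new value,
                                             # so a cache pointing at the container stays valid
    # PROCESS::<$OUT> := $some-new-handle;   # rebinding: swaps the container itself,
                                             # which is what would invalidate such a cache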
TimToady | in general it would be good to have a better idea of which dynvars are readonly though | 18:47 | |
what do you think of the idea of adding dynvar cache entries to a (supposedly immutable) lexpad on the fly | 18:49 | ||
that is, instead of having a dedicated hash in a caching frame, just use the lexpad | 18:50 | ||
we'd still presumably have a pointer in the current frame to the nearest caching frame, since we don't want to cache everything in every frame | 18:51 | ||
brokenchicken | nqp: say(nqp::div_i(10000000000000000, 4)) | ||
camelia | nqp-moarvm: OUTPUT«468729856» | ||
TimToady | where a caching frame is the nearest frame that is associated with a 'my $*foo' of some sort | |
(ignoring $/, $_, and $! for the moment) | |||
timotimo | yeah, $/, $_, and $! would probably trash that | 18:52 | |
TimToady | which we probably exempt from the dynvar cache | ||
and just scan frames for them, as currently | 18:53 | ||
brokenchicken | m: use nqp; say(nqp::isbig_I(10000000000000000)) | ||
camelia | rakudo-moar f2894d: OUTPUT«1» | ||
brokenchicken | hm | ||
m: say int.Range | |||
camelia | rakudo-moar f2894d: OUTPUT«-9223372036854775808..9223372036854775807» | ||
brokenchicken | So that range is bogus? Our int can have max around 20 bits? | 18:54 | |
s: int, 'Range', \() | |||
SourceBaby | brokenchicken, Sauce is at github.com/rakudo/rakudo/blob/f289...nt.pm#L157 | ||
TimToady | maybe isbig is about whether we steal half the pointer or not | ||
brokenchicken | m: say log 9223372036854775807 / log 2 | 18:55 | |
camelia | rakudo-moar f2894d: OUTPUT«44.0347852958582» | ||
timotimo | yup | ||
there's an XXX in there | |||
"someone please check that on a 32bit platform" | 18:56 | ||
brokenchicken | m: say log(9223372036854775807) / log 2 | ||
camelia | rakudo-moar f2894d: OUTPUT«63» | ||
timotimo | oh | ||
it's actually about fitting into an INTVAL | |||
so i expect it to be about 64bit | |||
brokenchicken | m: say log(10000000000000000) / log 2 | ||
camelia | rakudo-moar f2894d: OUTPUT«53.1508495181978» | ||
jnthn | TimToady: The set of lexicals we have is fixed by compile time | 19:03 | |
TimToady | yes, but do we rely on that in a way that would prevent using the data structure as the cache? | ||
jnthn | TimToady: The lexpad is actually an array of fixed size, with a hash held statically mapping names to indexes if we need to do late-bound lookups | ||
MoarVM doesn't have a concept of a not-fixed-at-compile-time lexical. | 19:04 | ||
So yeah, that goes quite deep | |||
But if the ideas is to hang the cache off of a frame that has dynamic lexicals, we would still hang it off the frame, I guess? | 19:05 | ||
TimToady | so, in theory, we could add to the hash, and extend all the arrays (lazily perhaps) with the same offsets for any given dynvar | ||
I'm just trying to save a pointer in the frame | 19:06 | ||
jnthn | I think an awful lot of assumptions hang off that array's size being fixed. | ||
It's even allocated with the fixed size allocator | 19:07 | ||
TimToady | if we didn't steal the lexpad, my scheme has a pointer to the current cache frame, and a hash pointer in cache frames | ||
jnthn | I think I'd prefer the extra pointer in MVMFrame | ||
I mean, this scheme actually *eliminates* two pointers |||
TimToady | and in a cache frame, the cache pointer points to the next cache frame up the stack | ||
jnthn | Oh, there is one other worry however. :S | ||
m: sub foo() { my $*a; await bar(); say $*a }; sub bar() { start { $*a = 42 } }; foo() | 19:08 | ||
camelia | rakudo-moar f2894d: OUTPUT«42» | ||
jnthn | In the case we await this isn't so troublesome, but dynamic lookups can come from another thread | 19:09 | |
Unless we say that those cannot use the cache | |||
But if we have a hash and we're fiddling with it on one thread and reading it on another...SEGV coming. | 19:10 | ||
TimToady | is this a GC thing the we can't point to another thread's frame? | ||
*that | |||
jnthn | No, in fact we hold references to frames from other threads all the time | 19:11 | |
It's not that you can't hold a reference. It's that you *can* hold a reference. | |||
And that means you can't assume something you hang off of MVMFrame is only going to be touched by a single thread. | |||
The ->outer of any start block, and most supply blocks, points to an MVMFrame that started life on another thread. | 19:12 | ||
TimToady | well, an update is gonna be pretty quick and simple, from a locking point of view | ||
or we go with some lock-free scheme | 19:13 | ||
there's nothing says the cache has to be standard hash | |||
jnthn | Well, the typical one in this kind of situation is to go immutable | ||
TimToady | especially if we're gonna intern the dynvar names | 19:14 | |
jnthn | So we never actually mutate the hash, we just make a new one and it's the installation of that which is atomic | ||
So something is only ever reading a particular version | |||
TimToady | since there will typically not be so many unique dynvar names | ||
nine | Read Copy Update | ||
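(A minimal sketch of the copy-then-publish idea in high-level Perl 6, with all names invented; at the VM level the publish step would need to be a genuine atomic pointer swap, the binding here only illustrates the shape of the scheme.)

    my %dynvar-cache;                         # readers only ever see a complete version
    sub cache-insert(Str $name, Mu $value) {
        my %updated = %dynvar-cache;          # copy the current version
        %updated{$name} = $value;
        %dynvar-cache := %updated;            # publish the new version in one rebinding
    }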
jnthn | Provided the GC is managing the hash then it'll take care of there being no dangling references. | 19:15 | |
How many things do we suspect we'll have in the hash, though? | |||
(I'm wondering if there's enough to make a hash worth it.) | |||
TimToady | if we have integers representing names, scanning a short array for a given integer is gonna be pretty cache friendly | 19:16 | |
jnthn | (Since, especially if you intern, you can linear-scan a small number of entries pretty fast.) | ||
Yeah | |||
Of course, the moment we start to intern is the moment we get a new memory leak. :-) | |||
(Though not likely to be an issue in this case.) | 19:17 | ||
TimToady | in the olden days, you'd make a linked list and link the new one in at the front for sharing between the immutable tails | ||
but not so cache friendly | |||
jnthn | Aye. | 19:18 | |
TimToady | I suppose with some history one might just create a frame with the cache entries we think we'll need, but populate the entries lazily | 19:19 | |
so we don't build up from an empty array of integers each time | |||
that is, manage the "namespace" of the integer list much like we do with the names of a lexpad, so all similar frames share the same structure | 19:21 | ||
jnthn | Could do | ||
There's also a bunch of spare static frame flags, fwiw | |||
If we need to convey which frames should have a cache down to the VM | 19:22 | ||
TimToady | well, I'm mostly just trying to preload the branes of whoever are going to do this eventually, which I suspect is not me, given how much I have yet to do on the compiler end of things :) | 19:24 | |
and GC vs frames is still a half a paygrade up from me :) | 19:25 | ||
but it's nice to know there's an "easy" 3% improvement sitting there, anyway | |||
well, I suppose if I do the braid thing right, it could drop below 3% overhead for the parser | 19:26 | ||
but still | |||
timotimo | maybe the fact that it'll fill up cpu caches less will also make things in the periphery a tiny bit faster … | 19:27 | |
one can dream, right? | |||
TimToady | Sure, anything you don't do has the benefit of not adding entropy to the universe. :) | 19:30 | |
(except maybe for the deciding not to do it part...) | |||
my general plan of attack on the language braid issue is much the same as I did with $*ACTIONS, which is to use the current dynvars as the scaffolding to double-check that the new mechanism points to the same values at critical spots, then eventually remove the scaffolding | 19:36 | ||
timotimo | %) | ||
TimToady | in the case of %*LANG, we'll have to decide whether to support people who are currently relying on the %*LANG interface | 19:37 | |
it's not tested in roast, so not officially a part of the language, but at the same time, people have written documents referencing it, if not modules | |||
timotimo | yeah, modules exist that use it | 19:40 | |
TimToady | can maybe do some shallow emulation there in the short term for 6.c | 19:41 | |
had some shallow emulation of $*ACTIONS in there for a while till I decided to just rip out that scaffolding | |||
haven't heard much carpage about that... | |||
but in the case of actions, we already had a documented :actions() interface to .parse | 19:42 | ||
timotimo | aye | ||
TimToady | what's the name of "all the modules" again? | 19:43 | |
timotimo | perl6-all-modules | ||
potentially under moritz/ | |||
TimToady | thanks | ||
I see that I've bitrotted v5 by removing $*ACTIONS, but maybe it was already bitrat | 19:50 | ||
brokenchicken | it was already | 19:51 | |
rat is an acceptable past tense of rot? | 19:52 | ||
TimToady | rit, rat, had rot | 19:53 | |
moritz | for experimental linguists at least :-) | ||
TimToady | yeah, there's 10 or so modules out there that refer to %*LANG, besides v5 | 19:56 | |
moritz | probably Slang:: modules? | 19:57 | |
oh, seems some others too | |||
CompUnit::Util | 19:58 | ||
BioInf # that one surprised me | |||
Control::Bail | |||
TimToady | fortunately, none of them other than v5 do 'my %*LANG' | 20:00 | |
so I should be able to set up %*LANG such that it can just poke things into the new braid thingie | 20:01 |
just need a temporary $*BRAID thing to point %*LANG at the current cursor's braid object | 20:02 | ||
but that has to be 'my $*BRAID' at the same spots we currently do 'my %*LANG'...hmm... | 20:05 | ||
or maybe just keep it in %*LANG<braid>, and then it automatically scopes the same... | 20:06 | ||
and automatically goes away when %*LANG goes away | |||
moritz | so, what's a braid? | ||
TimToady | all the languages associated with the current actual language we're parsing, so MAIN, Regex, Quote, etc | 20:07 | |
I suppose we could call it tag-team languages :) | 20:08 | ||
but I like "braided languages" | |||
in a larger sense, the braid is all those things that define the current language at the current spot in the lexical scope, so includes things like, pragams, current package name, anything that influences the meaning of anything we haven't parsed yet | 20:10 | ||
so the cursor itself represents the current language in the narrow sense of grammar methods, while the braid (which will hang off the cursor as a kind of shared delegate (but that changes more frequently than $!shared does)) represents the current language in the wider sense | 20:12 | ||
TimToady wonders what a pragam is... | 20:13 | ||
moritz | ok, thanks for the explanation | ||
TimToady | so basically the cursor's $!shared delegate is everything shared by a given parse, while $!braid will be everything shared by a lexical scope (or the rest of the lexical scope, when we change it halfway through) | 20:14 | |
we could perhaps combine those, since the whole parse is a kind of lexical scope, but then we'd be copying the same values around everywhere when we know they're never gonna change in an inner scope | 20:16 | ||
whether the occasional copy is more expensive than the extra pointer, who can say? | 20:17 | ||
jnthn figures %*HOW or whatever it's called will end up in the braid too | |||
TimToady | lotsa things going in there, if lexically scoped | 20:18 | |
as you said, maybe the whole world goes along with it | |||
but maybe the world can just be in $!shared | 20:20 | ||
unless we want to start dealing with hypothetical worlds or some such, in which case we might carry it along for the ride | 20:22 | ||
as I say, the division between $!shared and $!braid is really only pragmatic for how much copying we want to do at language boundaries | |||
interestingly, with the braid delegate, we can tweak the current braid using mixins without recalculating the NFAs based on the cursor type | 20:24 | ||
so we could get rid of the hash and just use mixins for the equivalent of %*LANG<Regex> and such | 20:25 | ||
and then the braided languages would be encoded in the braid type, without any attribute, just a method to return the sublang | 20:27 | ||
dunno how much that buys us, though, given how seldom we would actually copy one braid object to another | 20:28 |
and since a braid object is shared over many cursors, probably don't really need to worry much about attribute storage | 20:29 | ||
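(Purely to illustrate the design being sketched here, not Rakudo code, and with all attribute names invented: the braid is a small object hung off the cursor carrying everything lexically scoped about 'the current language', so passing it along costs one pointer copy.)

    class Braid {
        has %.langs;      # MAIN, Regex, Quote, ... (what %*LANG carries today)
        has $.actions;    # what $!actions / $*ACTIONS carry today
        has %.pragmas;    # lexically scoped pragma state
        has $.package;    # current package name, 'are we in the core setting', etc.
    }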
lizmat | m: sub a(--> 42) { { return } }; dd a # expected 42 | 20:59 | |
camelia | rakudo-moar f2894d: OUTPUT«Nil» | ||
lizmat | am | ||
jnthn: am I wrong in expecting that? ^^^ | |||
jnthn | What were you expecting? | 21:12 | |
Looks right to me. | |||
m: sub a(--> 42) { return }; dd a | 21:13 | ||
camelia | rakudo-moar f2894d: OUTPUT«42» | ||
jnthn | Um | ||
How does that one work? | |||
I thought the --> was just for the fall-off-the-end though... | |||
lizmat | jnthn: apparently not ? | 21:15 | |
m: sub a(--> 42) { return 666 } | 21:16 | ||
camelia | rakudo-moar f2894d: OUTPUT«===SORRY!=== Error while compiling <tmp>␤No return arguments allowed when return value 42 is already specified in the signature␤at <tmp>:1␤------> sub a(--> 42) { return 666 ⏏}» | |
lizmat | jnthn: so I guess a bare return is sorta expected in this casse | ||
*case | |||
jnthn | m: sub a(--> 42) { if 1 { return 666 } } | 21:27 | |
camelia | ( no output ) | ||
jnthn | Fail | ||
m: sub a(--> 42) { if 1 { return } }; say a | |||
camelia | rakudo-moar f2894d: OUTPUT«Nil» | ||
jnthn | That's a pretty bad discrepancy | ||
m: sub a(--> 42) { return } | 21:28 | ||
camelia | ( no output ) | ||
jnthn | m: sub a(--> 42) { return }; say a | ||
camelia | rakudo-moar f2894d: OUTPUT«42» | ||
jnthn wonders how that works :) | |||
lizmat | rakudobug material ? | 21:30 | |
jnthn | Surely | 21:32 | |
I didn't even know we had that feature :P | |||
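(The feature under discussion, for anyone who had not met it either: a literal in the `-->` slot of the signature is the routine's definite return value, so the body does not, and per the error above must not, return anything else.)

    sub answer(--> 42) { }    # falling off the end of the body yields the constant
    say answer();             # 42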
lizmat | RT #130706 | 21:45 | |
synopsebot6 | Link: rt.perl.org/rt3//Public/Bug/Displa...?id=130706 | ||
perlpilot | that's interesting. | 21:48 | |
m: sub a(--> 42) { sub { return } }; dd a # weird | |||
camelia | rakudo-moar f2894d: OUTPUT«42» | ||
lizmat | no, to be expected | 21:49 | |
you return out of the inner sub, and fall off the outer one, and that returns 42 | |||
perlpilot | oh, because the inner sub is basically ignored | 21:50 | |
lizmat | ah, and that :-) | ||
m: sub a(--> 42) { return sub { return } } | 21:51 | ||
camelia | rakudo-moar f2894d: OUTPUT«===SORRY!=== Error while compiling <tmp>␤No return arguments allowed when return value 42 is already specified in the signature␤at <tmp>:1␤------> sub a(--> 42) { return sub { return } ⏏}» | |
perlpilot | sweet. | 21:52 | |
m: sub a(--> 42) { if 1 { return 5 } }; dd a | |||
camelia | rakudo-moar f2894d: OUTPUT«5» | 21:53 | |
lizmat | m: sub { } # perhaps a nameless sub in sink context should warn ? | ||
camelia | ( no output ) | ||
perlpilot | so, that one should have complained in the same way | ||
TimToady | yes, shoulda | 21:58 | |
Geth | Pod-To-HTML/coke/html-test: 0b44eb2aa4 | (Will "Coke" Coleda)++ | t/09-Html.t Add a test for =pod Html Issue #23 |
22:14 | |
[Coke] | ... wrong window. | ||
MasterDuke | can NQP's src/HLL/Compiler.nqp not use $*W? | 23:27 | |
Geth | star: wchristian++ created pull request #85: note in the readme that platform-issues may exist |
23:45 | |
travis-ci | Rakudo build passed. Zoffix Znet 'Merge pull request #1010 from ronaldxs/new-fancier-fudgeandrun | 23:51 | |
travis-ci.org/rakudo/rakudo/builds/197412611 github.com/rakudo/rakudo/compare/9...53f6770f58 | |||
Geth | star: 5388a95f66 | (Steve Mynott)++ | tools/star/release-guide.pod I did 2017.01 |
23:55 | |
star: 494842a834 | (Christian Walde (Mithaldu))++ | README note in the readme that platform-issues may exist Currently the readme does not mention that there may be issues with the build process, which are documented outside of the tarball. This added sentence mentions that. |
|||
star: e0a5125286 | (Steve Mynott)++ | README Merge pull request #85 from wchristian/patch-1 note in the readme that platform-issues may exist |