00:39 Kaiepi left 00:42 Kaiepi joined 00:59 wildtrees left 01:03 softmoth joined 01:20 guifa2 left 02:18 guifa2 joined 02:24 guifa2_ joined, guifa2 left, guifa2_ is now known as guifa2
[Coke] greppable6: nqp::without 02:45
greppable6 [Coke], Found nothing! 02:46
[Coke] greppable6: without
AlexDaniel [Coke]: it'll take some time to gist that :D 02:48
[Coke] no doubt.
just trying to figure out if the with/without opcodes are used anywhere other than the nqp test suite.
greppable6 [Coke], 8380 lines, 1066 modules: gist.github.com/f47cc0101e69eeb71f...fb4e7ec1ca 02:49
AlexDaniel heh, it did it
I thought it was going to be in the sweet spot of not being big enough for it to just bail, but big enough for it to keep trying forever 12:50
[Coke] wonders if there's a prebuilt recent nqp-jvm somewhere to test things on 02:58
news.perlfoundation.org/post/gc_pr...ls_2020-03 We did have a grant proposal come in for Raku for this period. 03:23
03:56 evalable6 left, linkable6 left, linkable6 joined 03:57 evalable6 joined 06:13 coverable6 left, releasable6 left, reportable6 left, committable6 left, statisfiable6 left, notable6 left, squashable6 left, quotable6 left, unclechu left 06:15 coverable6 joined, releasable6 joined, reportable6 joined, committable6 joined, statisfiable6 joined, notable6 joined, squashable6 joined, quotable6 joined 06:18 unclechu joined 06:48 lichtkind joined 07:16 ufobat__ left
MasterDuke nqp-j: say("hi?") 07:50
08:45 softmoth left 09:32 Altai-man_ joined
lizmat Files=1306, Tests=111236, 212 wallclock secs (28.87 usr 8.20 sys + 3003.32 cusr 265.36 csys = 3305.75 CPU) 09:41
Geth rakudo: 77a2201e4e | (Stefan Seifert)++ | src/core.c/ThreadPoolScheduler.pm6
Fix ThreadPoolScheduler only adding affinity threads in extreme cases

Swapped branches of an nqp::if caused us to only ever consider the highest threshold for adding a new affinity-worker (queue length 100).
09:50
rakudo: 750abe0386 | (Stefan Seifert)++ | src/core.c/ThreadPoolScheduler.pm6
Only take really idle affinity workers straight away in ThreadPoolScheduler

Looking at the queue length alone doesn't tell us whether a worker is idle or not. We also have to take its working flag into account. Otherwise we may end up picking a busy worker straight away when there's an actually idle one standing by.
10:18 lichtkind_ joined 10:20 lichtkind left
MasterDuke nine: what sort of workloads will those ^^^ commits help with? 10:25
10:26 Altai-man_ left, sena_kun joined
nine MasterDuke: workloads with lots of IO::Socket::Async, Proc::Async and IO::Notification usage. In my case two Cro applications (web frontend using a REST API providing backend) 10:28
MasterDuke nice 10:29
10:47 hankache joined 10:59 hankache left 11:14 Altai-man_ joined 11:17 sena_kun left
nine releasable6: status 11:47
releasable6 nine, Next release will happen when it's ready. 5 blockers. 166 out of 335 commits logged (⚠ 3 warnings)
nine, Details: gist.github.com/2a42dd9d46c2b98237...10654629a7
Altai-man_ nine, looking for blockers to look at? 11:51
nine Doesn't look like any of them are about something I'd work on 11:53
Altai-man_ I think main ones are dispatch, which is on jnthn, and windows segfault. 11:55
nine Which look like they're the same 11:56
Altai-man_ Extremely intricate for sure. 11:58
MasterDuke jnthn has said he didn't think the windows segfault was in fact related to dispatch
iirc
12:26 Kaiepi left 12:27 Kaiepi joined 12:28 Kaiepi left
lizmat so, thinking a lot about my ... (series operator) optimizing work 12:37
I've basically come to the conclusion that I don't want to mimic all of its current quirks in a new version
so I'm now considering either putting this work into a module in the ecosystem 12:38
or making the less quirky behaviour a 6.e-only thing
jnthn moritz nine vrurg opinions? 12:39
moritz sounds sensible to me 12:41
lizmat 6.e or module ?
moritz doing it as a module first, and if it works out well, propose it for 6.e 12:42
lizmat suggestions for the name of the module? Operator::Series ? 12:43
moritz Better::Series (j/k) :D
Operator::Series sounds good 12:44
lizmat actually, making it a module, might allow us to remove ... from the core completely at some point in the future, which some of us apparently like to see :-) 12:49
timotimo we should have a namespace that's like ACME, but not for jokes. how about Serious::
`use Serious::Firepower`
lizmat
.oO( phasers on disintegrate )
12:50
timotimo `use Dys::Functional`
lizmat hmmm Sequence::Generator ? 12:54
Altai-man_ `Seq::Generator`?
If it is not misleading.
lizmat inspired by this from the speculation: 12:56
"The righthand first value is considered to be the endpoint or limit of the sequence that is to be generated from the lefthand side by the C<...> operator itself."
AlexDaniel lizmat: why not use `experimental`?
lizmat what would be the difference between that and putting it in 6.e.PREVIEW? 12:57
AlexDaniel ah no, 6.e.PREVIEW is even better
lizmat++ # brave soul 12:59
lizmat I think I'm going to go for a separate module now for 2 reasons 13:02
nine Sounds to me like you want to develop it in a module anyway, for the much shorter compile cycles alone
lizmat 1. it makes developing faster (what nine said)
2. it would allow people who are stuck on 6.c or 6.d for some reason, to still use it
nine I should have done that with the ThreadPoolScheduler, too :)
lizmat nine: yeah, maybe that *is* a good point :-) 13:03
ah, and 3: would potentially allow export to a different operator name, for comparisons :-)
nine Oh, yes, that should make debugging a lot easier 13:05
13:15 sena_kun joined 13:16 Altai-man_ left 13:18 Xliff joined
Xliff \o 13:18
Do other languages have the Slip concept? As in, are there languages other than Raku that can take an array and convert it to positional parameters? 13:19
nine In Python it's foo(*array) 13:21
sena_kun Not sure about Slip, but there are things for doing the second thing in plenty of other languages. Java has ..., python has *, etc.
lizmat of course, Slip is more general than that
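nine's Python splat can be sketched like this (a minimal example; `foo` and `args` are illustrative names, not anything from the chat):

```python
# Python's call-site splat, as nine mentions: *array spreads a list
# into positional arguments.
def foo(a, b, c):
    return a + b + c

args = [1, 2, 3]
print(foo(*args))      # -> 6

# The nearest analogue to a Raku Slip flattening into a surrounding
# list is * inside a list literal:
outer = [0, *args, 4]
print(outer)           # -> [0, 1, 2, 3, 4]
```

As lizmat notes, Raku's Slip is more general: it flattens wherever it lands, not just at a call site.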
Xliff Yes, but those sound like it's from the calling end. I was more concerned with caller. 13:42
:)
Can someone tell my why this is dying with an X::Assignment::RO? 13:46
repl.it/repls/SpatialUtterDemand
Ah. Nevermind. 13:51
14:20 Kaiepi joined
Xliff Anybody know of an algorithm that can pull the largest, most commonly used substring from a text document? 14:36
Hmmm... maybe this might do. 14:38
### /usr/include/gstreamer-1.0/gst/base/gstbasesink.h
sub gst_base_sink_do_preroll (GstBaseSink $sink, GstMiniObject $obj)
returns GstFlowReturn
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_blocksize (GstBaseSink $sink)
returns guint
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_drop_out_of_segment (GstBaseSink $sink)
returns uint32
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_last_sample (GstBaseSink $sink)
returns CArray[GstSample]
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_latency (GstBaseSink $sink)
returns GstClockTime 14:39
is native(gstreamer-base)
is export
{ * }
lizmat oops?
Xliff sub gst_base_sink_get_max_bitrate (GstBaseSink $sink)
returns guint64
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_max_lateness (GstBaseSink $sink)
returns gint64
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_processing_deadline (GstBaseSink $sink)
returns GstClockTime
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_render_delay (GstBaseSink $sink)
returns GstClockTime
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_get_sync (GstBaseSink $sink)
returns uint32
is native(gstreamer-base)
is export
{ * } 14:40
sub gst_base_sink_get_throttle_time (GstBaseSink $sink)
returns guint64
is native(gstreamer-base)
lizmat wonders how long it will take before Xliff is bumped
Xliff is export
{ * }
sub gst_base_sink_get_ts_offset (GstBaseSink $sink)
returns GstClockTimeDiff
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_is_async_enabled (GstBaseSink $sink)
returns uint32
is native(gstreamer-base)
is export
{ * }
sub gst_base_sink_is_last_sample_enabled (GstBaseSink $sink)
returns uint32
is native(gstreamer-base)
is export
14:40 Xliff left
lizmat bisectable6: dd PredictiveIterator 14:42
bisectable6 lizmat, Bisecting by exit code (old=2015.12 new=750abe0). Old exit code: 1
lizmat, bisect log: gist.github.com/0484b1582d5f70fe81...5c8b47bc5b
lizmat, (2019-05-14) github.com/rakudo/rakudo/commit/ec...107b410194
lizmat bisectable6: class A does PredictiveIterator { } 14:43
bisectable6 lizmat, Bisecting by output (old=2015.12 new=750abe0) because on both starting points the exit code is 1
lizmat, bisect log: gist.github.com/84ddca7d7f4485991b...dd33ce9379 14:44
lizmat, (2018-09-09) github.com/rakudo/rakudo/commit/3f...37262026ac
lizmat hmmm... 2.5 years ago, just before 6.d 14:45
guifa2 Xliff: you'd need to define largest and most common. Those are two different ways to score that can go against each other. "abcabcabca" -> "abc" x 3, but "a" x 4. 14:51
tellable6 guifa2, I'll pass your message to Xliff
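guifa2's scoring conflict can be checked quickly in Python (the helper name is made up for illustration):

```python
def count_occurrences(text, sub):
    # Count overlapping occurrences of sub in text.
    return sum(text[i:i + len(sub)] == sub
               for i in range(len(text) - len(sub) + 1))

text = "abcabcabca"
print(count_occurrences(text, "abc"))  # -> 3 (longer substring, fewer hits)
print(count_occurrences(text, "a"))    # -> 4 (shorter substring, more hits)
```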
jnthn lizmat: Please can you make a problem solving ticket detailing the behaviors/quirks you'd like to remove? I think it's also useful to know which ones are actually spec (as in, covered by spectest in 6.d) and which are just properties of the previous implementation. Also ecosystem reliance. 14:55
lizmat will do as soon as the module is fleshed out and I have an initial set of capabilities 14:56
jnthn I mean, if it turns out that most things you'd like to drop have no spectests and nearly no ecosystem dependence, then the question is quite different.
lizmat indeed...
well, one thing I'd like to drop, is chaining
m: dd 1 ... 5 ... 1
camelia (1, 2, 3, 4, 5, 4, 3, 2, 1).Seq
lizmat would have to be written as:
m: dd 1 ... (5 ... 1)
camelia (1, 2, 3, 4, 5, 4, 3, 2, 1).Seq
jnthn Hm, but if the second form is well defined then so is the first, by associativity? 14:57
lizmat m: dd (1 ... 5) ... 1
camelia (1,).Seq
timotimo right, because the last argument is a "end condition" and also a filter
jnthn Or does it associate wrong and then we have a hack to deal with that?
timotimo m: dd 5 ... 1 ... 5 14:58
camelia (5, 4, 3, 2, 1, 2, 3, 4, 5).Seq
timotimo i sure hope we have it as assoc list
lizmat nqp::hash('prec', 'f=', 'assoc', 'list'); 14:59
jnthn Oh goodness, it comes out as &infix:<...>(1, 5, 1)
Right. D'oh.
lizmat m: dd 1 ... 5 ...^ 1 15:00
camelia ===SORRY!=== Error while compiling <tmp>
Only identical operators may be list associative; since '...' and '...^' differ, they are non-associative and you need to clarify with parentheses
at <tmp>:1
------> dd 1 ... 5⏏ ...^ 1
lizmat m: dd 1 ...^ 5 ...^ 1 # now what does that mean? 15:01
camelia ===SORRY!=== Error while compiling <tmp>
Calling infix:<...^>(Int, Int, Int) will never work with signature of the proto ($, Mu, *%)
at <tmp>:1
------> dd 1 ...^ 5 ⏏...^ 1 # now what does that mean?
jnthn Nothing useful. :)
lizmat indeed
jnthn Is 1 ... 5 ... 1 spec?
lizmat it is tested for
jnthn OK
Yes, when I say spec that's what I mean :) 15:02
lizmat not sure it is documented though :-)
jnthn Docs aren't spec.
lizmat true :-)
jnthn *sigh* The number of language tickets is huge. I think I need to commit to ruling on one a week or something. It'll at least be something.
And doesn't feel so overwhelming. :) 15:03
lizmat ++jnthn
15:14 Altai-man_ joined 15:16 sena_kun left
lizmat afk for a few hours& 15:17
15:25 Xliff joined
Xliff lizmat: HA! I bumped myself! ;D 15:25
tellable6 2020-04-19T14:51:35Z #raku-dev <guifa2> Xliff: you'd need to define largest and most common. Those are two different ways to score that can go against each other. "abcabcabca" -> "abc" x 3, but "a" x 4.
Xliff guifa: It would be longest common substring, which would never be just "a", since "abc" is longer. 15:27
guifa2 Ah, LCS is a different problem :-) So you're comparing two strings ? 15:29
Xliff No. I want to extract the longest substring from a given piece of text 15:30
MasterDuke Xliff: isn't that just the given piece of text itself? 15:46
guifa2 I think Xliff wants the longest substring that's repeated at least once 15:48
m: sub lcs(\t) { my Int %r; for 0 ..^ t.chars -> \f { for f ..^ t.chars -> \l { %r{t.substr: f, l}++ }; }; %r.pairs.grep(*.value > 1).sort(*.key.chars).tail.key; }; say lcs("abcdabcdabc")
camelia cdabcdabc
guifa2 It's brute forcing the problem, but it works
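The same brute-force idea as a Python sketch (the function name is made up). It bounds each substring by its end index rather than by a length that substr may clamp, which is likely why it reports a different winner for the same input:

```python
from collections import Counter

def longest_repeated_substring(text):
    # Count every substring by its (start, end) span -- O(n^2) substrings,
    # so this is only viable for short inputs.
    counts = Counter(text[i:j]
                     for i in range(len(text))
                     for j in range(i + 1, len(text) + 1))
    repeated = [s for s, c in counts.items() if c > 1]
    return max(repeated, key=len, default="")

print(longest_repeated_substring("abcdabcdabc"))  # -> abcdabc
```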
Xliff guifa: Yeah, but to do that for a whole piece of text gets long. 15:49
And you are incorrect.
I don't need the longest substring of a string. I need the longest common substring out of a piece of text
Take my almost-foray-into-bumpage, earlier: github.com/Xliff/p6-GStreamer/blob...seSink.pm6 15:50
The algorithm I want should pull gst_base_sink as one of the longest substrings. 15:51
MasterDuke m: say "which is the longest".words.sort(*.char).head(1) 15:52
camelia No such method 'char' for invocant of type 'Str'. Did you mean any of these?
can
chars
chop
chr

in block <unit> at <tmp> line 1
MasterDuke m: say "which is the longest".words.sort(*.chars).head(1) 15:53
camelia (is)
MasterDuke m: say "which is the longest".words.sort(*.chars).tail(1)
camelia (longest)
Xliff Again, not the problem space! 15:55
MasterDuke m: say "which is the longest of the words in the string of words".words.Bag.grep(*.value > 1).sort(*.key.chars).tail(1)
camelia (words => 2)
Xliff Take this for a starting point: repl.it/repls/SpatialUtterDemand
If I start out with that list, I want to get gst_base_sink_ as the longest common substring.
MasterDuke that sounds computationally expensive 16:02
Xliff Yeah. 16:04
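If what's wanted from a list like the gst_base_sink_* subs is just the shared leading prefix, that special case is cheap; Python's stdlib covers it (the names below are a sample from the paste above):

```python
import os.path

names = [
    "gst_base_sink_do_preroll",
    "gst_base_sink_get_blocksize",
    "gst_base_sink_get_last_sample",
    "gst_base_sink_is_async_enabled",
]
# commonprefix does a plain character-wise comparison across the list.
print(os.path.commonprefix(names))  # -> gst_base_sink_
```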
Geth rakudo/rakuast: a9bde11299 | (Jonathan Worthington)++ | 6 files
Implement RakuAST support for dynamic variables

Both declaration and access. The lookup is done a little differently from the current compiler, and should be a bit more efficient, at the cost of slightly larger code size.
16:06
guifa2 LCS in any form is a (computationally) hard problem. When you can set out initial conditions for it, there are ways to substantially speed it up. 16:07
For instance, if we can assume that the substring won't cross word boundaries, we can first divide the text up by words, which means the inner loop in my code above won't ever run more than $largest-words.chars times, probably reducing the runtime by two orders of magnitude, and then the sort() will likewise be sped up by at least an order of magnitude
Xliff Yeah. I think I got it.
1) Go through the list of common words (a la words that start with a prefix). 2) For the first run through, that value is the current return value. 3) If the next value compared with the return value yields a smaller common substring, then that substring becomes the current return value 16:09
4) return the current return value
Yeah, there is a trick to it. That is prone to bugs if there are multiple substrings in the list. 16:10
I don't know if I need to worry about that in the current space, though.
MasterDuke Xliff: blogs.perl.org/users/damian_conway/...rings.html looks like it might be useful 16:16
Xliff MasterDuke++
Geth Blin: 70c2390250 | (Aleks-Daniel Jakimenko-Aleksejev)++ | bin/blin.p6
Don't write the file on every iteration (oops!)
16:17
AlexDaniel Xliff: ↑ :D
Xliff :p 16:19
AlexDaniel++: Good catch 16:20
MasterDuke: That works! Check repl.it/repls/SpatialUtterDemand 16:22
16:23 softmoth joined
MasterDuke nice 16:24
16:32 Xliff left
Geth Blin: c8ed4d1969 | (Aleks-Daniel Jakimenko-Aleksejev)++ | bin/blin.p6
Indent the :to block to make the code more readable
16:34
16:43 lichtkind_ left 17:15 sena_kun joined 17:17 Altai-man_ left 17:27 MasterDuke left
guifa2 finally got a handle on P6Regex/Grammar.nqp and got a Binex version up and running 17:28
Now to tackle Actions.nqp . Much scarier lol
lizmat guifa2: care to blog about your accomplishments ? 17:31
17:44 MasterDuke joined
[Coke] nqp: nqp::js 17:44
camelia Running JS NYI on MoarVM
at <tmp>:1 (<ephemeral file>:<mainline>)
from gen/moar/stage2/NQPHLL.nqp:1916 (/home/camelia/rakudo-m-inst-1/share/nqp/lib/NQPHLL.moarvm:eval)
from gen/moar/stage2/NQPHLL.nqp:2121 (/home/camelia/rakudo-m-inst-1/share/nqp/li…
[Coke] pmurias: ^^ ?
tellable6 [Coke], I'll pass your message to pmurias
AlexDaniel ehhh tellable6 should probably include a link to irc logs 17:45
pmurias: context: colabti.org/irclogger/irclogger_lo...04-19#l329
tellable6 AlexDaniel, I'll pass your message to pmurias
[Coke] if all an opcode does is throw a NYI exception, should we include it? (I'd vote no given that we already have a variety of implemented codes) 17:49
17:58 [TuxCM] left 18:17 pmurias joined
pmurias [Coke]: hi 18:17
tellable6 2020-04-19T17:44:51Z #raku-dev <[Coke]> pmurias: ^^ ?
2020-04-19T17:45:59Z #raku-dev <AlexDaniel> pmurias: context: colabti.org/irclogger/irclogger_lo...04-19#l329
pmurias [Coke]: re nqp::js: it was used as a debugging aid, as it inserts a bit of js code verbatim into the emitted JS code 18:18
18:18 Xliff joined
pmurias [Coke]: the reason it's stubbed on the moar backend is that when cross compiling, parts of NQP were being compiled to moar bytecode and js code at the same time 18:19
[Coke]: but I guess you could remove it from moar at this point, as it was added to help debugging at an earlier stage of nqp-js development 18:20
although it can be useful if you are emitting js code from an nqp running on moar 18:21
18:26 pmurias left
guifa2 lizmat: I definitely will 18:30
18:51 softmoth left, softmoth joined
lizmat m: dd 1 ...^ 5.5 # so how do we feel about that *not* excluding 5 ? 18:55
camelia (1, 2, 3, 4, 5).Seq
18:56 softmoth left
timotimo MasterDuke: are you available for another core setting profile with a moarvm patch i have? 18:57
MasterDuke timotimo: sure. on top of your other branch? 19:00
timotimo umm i think it will be 19:01
i thought it was already compiling but i just misread the output 19:02
it'll take a little time still
ok i just pushed it 19:06
MasterDuke interesting, what do you think the effects will be? 19:08
timotimo well if my hope is correct, 20x faster 19:09
oh, if you could "perf record" it as well, that'd be cool
MasterDuke ha, good thing i forgot to disable my swap file 19:14
just started it
19:14 Altai-man_ joined 19:17 sena_kun left
MasterDuke timotimo: hot damn, what did you do?!?! stage parse only took almost twice as long (i.e., 60s instead of the normal 37s) instead of the 930s last time i did the profile 19:28
but we'll see how long it takes to write the profile now... 19:29
oh hm, mark_call_graph_node() doesn't do anything now? 19:36
timotimo that's correct 19:37
because why not make the thing that did 95% of everything before do nothing at all after 19:48
AlexDaniel lizmat: what if we start by saying that it should count forever
m: say (1, * + 1 ...^ 5.5)[^20] 19:49
camelia (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20)
AlexDaniel any other behavior makes it a special case. Maybe it is justified, but if you want to end up with sane behavior it's probably better to start with no special cases and then see where we stand 19:50
reading the docs: “The right-hand side will have an endpoint, which can be Inf or * for "infinite" lists (that is, lazy lists whose elements are only produced on demand)” 20:11
wrong… as if in other cases it'll be eager or something… 20:12
timotimo MasterDuke: i assume writing out the data is still enormously slow?
MasterDuke slow, but definitely faster than before. currently at 2.6g written 20:14
wow, the profiling is done! just perf writing its data now 20:16
so looks like it was about one hour, right? 20:18
timotimo like many other things, i should have done this years ago
MasterDuke nine should love this
perf says the most expensive at 44% is VMArray_gc_mark. next at 4.2% is MVM_gc_root_add_frame_registers_to_worklist 20:19
timotimo OK
oh, did you get the memory usage peak, too? 20:20
that shouldn't have decreased, i don't think
AlexDaniel “An endpoint of * (Whatever), Inf or ∞ generates on demand an infinite sequence, with a default generator of *.succ” 20:22
again, wrong
MasterDuke it looked about the same as before. all ram used, ~28b swap
AlexDaniel as in, it depends on the elements you have on the left side
timotimo right
MasterDuke i didn't run that other thing. what was it?
timotimo what other thing, you mean printing MVM_profiler_measure_blah in gdb? 20:23
that's not quite as important this time, there was barely a change
MasterDuke smem 20:24
but yeah, didn't do either
AlexDaniel lizmat: there's one interesting thing we can do. So, … first needs to decide whether it is going to call .succ or .pred, so we have the result of this comparison. If the endpoint is not Callable, can't we just keep comparing it like this? 20:25
timotimo ah smem 20:26
i love that tool it's so good
AlexDaniel lizmat: if it's Same then it's our destination, if it's not Same and is different, then we overshot 20:28
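AlexDaniel's stopping rule can be modeled roughly in Python (a hypothetical helper, not the actual Raku internals): once the direction is known, compare each element against the endpoint, stop inclusively on an exact match, and bail when we overshoot:

```python
def take_until(seq, endpoint, ascending):
    # Each element either matches the endpoint (stop, keep it),
    # overshoots it (stop, drop it), or is kept and we continue.
    out = []
    for x in seq:
        if x == endpoint:
            out.append(x)
            break
        if (x > endpoint) if ascending else (x < endpoint):
            break
        out.append(x)
    return out

print(take_until([1, 2, 3, 4, 5, 6], 5.5, True))  # -> [1, 2, 3, 4, 5]
```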
nine timotimo: so how does your speedup work? 20:30
timotimo very well, thank you
nine Ah, the master wants to keep his secrets :)
timotimo nah, it's super simple 20:32
you know how the call graph is built up of nodes that each have an MVMStaticFrame pointer?
and that's how we identify what successor node to pick when we're invoking a new routine from one node
AlexDaniel lizmat: though it feels like it's going to break apart either way cuz now we have to make something up if code blocks are involved 20:33
timotimo (that's also how each node's "allocations" counter works)
well, it turns out that if you put a pointer to a gc-managed entity in every node of a tree (plus in these alloc entries that hang off of every node) then you also have to traverse the entire tree on every GC run
nine Aaah...and the array makes that so much simpler and faster 20:34
timotimo because otherwise pointers get outdated and you get 1) duplicate nodes of the same staticframe, and 2) a crash at the end when you're trying to extract name and filename and such from an SF that's wrong
yes, all type and staticframe pointers have moved to a flat array and there's now only an index into the array at every node
no more need to traverse a tree at all
nine brilliant :) 20:35
timotimo the memory access pattern also shrinks drastically, since we now leave the tree alone unless we're actually moving around in it by invoking and returning
though a linear search through the array isn't great, already loads slower than before, where it was literally just a pointer comparison 20:36
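The refactor timotimo describes (per-node GC-managed pointers replaced by indices into one flat array) can be sketched in simplified, hypothetical Python:

```python
class Interner:
    """Store each distinct object once in a flat list; nodes keep only an
    index. A collector then scans this one list instead of traversing
    the whole call-graph tree."""
    def __init__(self):
        self.flat = []
    def intern(self, obj):
        # Linear search, as in the chat: slower per lookup than the old
        # pointer comparison, but it keeps the tree free of GC pointers.
        for i, o in enumerate(self.flat):
            if o is obj:
                return i
        self.flat.append(obj)
        return len(self.flat) - 1

frames = Interner()
a, b = object(), object()
print(frames.intern(a), frames.intern(b), frames.intern(a))  # -> 0 1 0
```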
next step could be to allocate the succ arrays in the FSA, since they are likely often small 20:48
hmm. can we actually sensibly surpass 32bit counts of allocating stuff? 20:49
hm. how often per second would we have to allocate an object to surpass 32bit in a week, let's say 20:50
m: my $limit = 2 ** 32; my $time = 7 * 24 * 60 * 60; say $limit / $time
camelia 7101.46709
timotimo oh, that is possible
20:51 softmoth joined 21:15 sena_kun joined 21:17 Altai-man_ left 21:25 lichtkind joined
timotimo m: say (2 ** 32 * 200) / 1024 / 1024 21:32
camelia 819200
timotimo m: say (2 ** 32 * 200) / 1024 / 1024 / 1024 21:33
camelia 800
timotimo m: say (2 ** 16 * 200) / 1024 / 1024 / 1024
camelia 0.012207
timotimo m: say (2 ** 16 * 200) / 1024 / 1024
camelia 12.5
timotimo hmpf. with a 32bit index into the types array, we can handle all types if you're spending about 800 gigabytes of your memory on types
but going down to 16 will only give you the ability to handle types filling up 12.5 megs 21:34
now, this is only the STable itself, there'll also be at least one MVMObject that's the WHAT of it
and it's unlikely that you'll have so many types of REPRs that don't have REPR_data 21:35
m: say (2 ** 16 * 224) / 1024 / 1024
camelia 14
AlexDaniel lizmat: hah, interesting case: 21:37
m: say (-1, 2, -4 … -128)[^20]
camelia (-1 2 -4 8 -16 32 -64 128 -256 512 -1024 2048 -4096 8192 -16384 32768 -65536 131072 -262144 524288)
AlexDaniel I don't know why it doesn't stop
I mean, sure, -128 is not in the sequence, but:
m: say (-1, 2, -4 … -127)[^20]
camelia (-1 2 -4 8 -16 32 -64 Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil)
AlexDaniel m: say (-1, 2, -4 … -129)[^20]
camelia (-1 2 -4 8 -16 32 -64 128 Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil Nil)
AlexDaniel so hmmm
timotimo hmmm 21:48
if we expect there's only like sixty-four nodes in the tree that exceed "a crapton of allocations ..." 21:49
the counter can be half-halved into 0b11111...11xxxxx with a little index into yet another array
Geth rakudo: 65fdea7dd3 | (Elizabeth Mattijsen)++ | src/core.c/Mu.pm6
Make Mu.iterator use R:I.OneValue

The comment was out of date. Needs a fix in roast, where a defined Mu value is improperly tested for.
22:10
roast: 587e01c182 | (Elizabeth Mattijsen)++ | S32-list/iterator.t
Properly test for Mu.new.iterator
22:11
MasterDuke timotimo: should i try again with your new commit? think it'll be noticeably faster? 22:21
timotimo probably not 22:22
perhaps it'll use a teensy bitty less memory
AlexDaniel greppable6: \.\.\. 22:43
22:47 lichtkind left
greppable6 AlexDaniel, 10769 lines, 650 modules: gist.github.com/01ad490d7f6298250e...dbf0c8b34a 22:47
AlexDaniel greppable6: \.\.\.\s*[^}] 22:54
greppable6 AlexDaniel, 7267 lines, 551 modules: gist.github.com/5ce203fae4b8fe89c1...0e6986b4ab 22:56
AlexDaniel hmmmmm 22:58
greppable6: \.\.\.\s*[^}\s]
can I do that?
greppable6 AlexDaniel, 5342 lines, 472 modules: gist.github.com/6bffcc951482abcbf6...967666e8ec 22:59
AlexDaniel that's still too many 23:01
greppable6: ,.*\.\.\.\s*[^}\s]
greppable6 AlexDaniel, 1687 lines, 233 modules: gist.github.com/de6e02e6c9feed182b...0416fd1101 23:02
AlexDaniel moritz: there's no way it works, right?? github.com/moritz/perl6-all-module...ring.pl#L6 23:04
lizmat: ↑ food for thought :)
I'm also surprised how common it is 23:05
to do stuff like
`0.01, * + 0.01 ... 0.1`
instead of `0.01, 0.02 ... 0.1`
though both seem to be pretty popular 23:09
23:14 Altai-man_ joined 23:17 sena_kun left
Geth rakudo/rakuast: e40404285a | (Jonathan Worthington)++ | t/12-rakuast/var.t
Cover default/constraint of untyped scalar decl
23:18
rakudo/rakuast: 7be7d4cfad | (Jonathan Worthington)++ | 2 files
RakuAST handling of untyped array/hash decl