🦋 Welcome to the IRC channel of the core developers of the Raku Programming Language (raku.org #rakulang). This channel is logged for the purpose of history keeping about its development | evalbot usage: 'm: say 3;' or /msg camelia m: ... | Logs available at irclogs.raku.org/raku-dev/live.html | For MoarVM see #moarvm
Set by lizmat on 8 June 2022.
lizmat . 08:33
lizmat hmmm.. looks like Geth is not getting (some) messages from GitHub 08:39
lizmat Geth: uptime 09:33
Geth lizmat, 55 seconds
lizmat grrrr.. I really don't want to get into Geth's code atm to figure out why it's not publishing commits 09:34
Geth rakudo/main: 812ed5f698 | (Elizabeth Mattijsen)++ | src/core.c/RakuAST/Fixups.pm6
RakuAST: make-legacy-pod for =table NYI for now

The legacy grammar would do its best to interpret =table specifications in NQP. The new grammar just collects the paragraph(s) of the =table specification, leaving it to the legacy pod generation to make sense of them. But that code hasn't been written yet, so make it a NYI for now.
09:57
rakudo/main: 812ed5f698 | (Elizabeth Mattijsen)++ | src/core.c/RakuAST/Fixups.pm6
RakuAST: make-legacy-pod for =table NYI for now

The legacy grammar would do its best to interpret =table specifications in NQP. The new grammar just collects the paragraph(s) of the =table specification, leaving it to the legacy pod generation to make sense of them. But that code hasn't been written yet, so make it a NYI for now.
09:58
lizmat meh
Geth rakudo/main: 2edcaa3512 | (Elizabeth Mattijsen)++ (committed using GitHub Web editor) | 4 files
Add `is-monotonically-increasing` method to Iterators

Returns False by default. If it returns True, then .sort() can pass on the underlying iterator unchanged in a new Seq.
This now allows:
   say (1..*).sort; # (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14...
to work, rather than throwing because the underlying iterator is lazy.
09:59
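(Illustration only: a toy version of the idea in that commit. The class and variable names are made up; only the is-monotonically-increasing method name comes from the commit message.)

    # An iterator that promises its values come out in ascending order,
    # and a caller that skips the actual sorting when that promise is made.
    my class AscendingInts does Iterator {
        has Int $!i = 0;
        method pull-one()                            { ++$!i }
        method is-lazy(--> True)                     { }
        method is-monotonically-increasing(--> True) { }
    }

    my $iter   = AscendingInts.new;
    my $sorted = $iter.is-monotonically-increasing
        ?? Seq.new($iter)        # already in order: pass the iterator on unchanged
        !! Seq.new($iter).sort;  # otherwise sort for real (would hang: this iterator is infinite)

    say $sorted[^5];             # (1 2 3 4 5)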
lizmat turns out Geth has a bug that makes it *not* complain if it cannot bind to the designated port 10:00
somehow a version of Geth was still running, accepting data from GitHub but not reporting it here
killing that instance and then restarting Geth fixed it
Geth roast: 1af7b9e874 | (Elizabeth Mattijsen)++ | S07-iterators/range-iterator.t
Add test for Iterator.is-monotonically-increasing
10:19
lizmat nine moritz am I correct in understanding that self.define_slang('Quote', Raku::QGrammar, Raku::QActions); in src/Raku/Grammar is a runtime operation ? 10:26
nine yes 11:01
lizmat cool 11:03
m: (1..10).lazy.sort # meh 11:38
camelia The iterator of this Seq is already in use/consumed by another Seq (you
might solve this by adding .cache on usages of the Seq, or by assigning
the Seq into an array)
in block <unit> at <tmp> line 1
lizmat it appears that github.com/rakudo/rakudo/commit/2edcaa3512 introduced this, so reverting that for now 11:39
bisectable: old=2023.04 (1..10).lazy.sort # meh
bisectable6 lizmat, Bisecting by output (old=2023.04 new=812ed5f) because on both starting points the exit code is 1
lizmat, bisect log: gist.github.com/7db3ace67d886a653d...9f91ca4935
lizmat, (2023-04-26) github.com/rakudo/rakudo/commit/2e...c69b6edf0d
lizmat yup
nine Honestly, I think (1..*).sort failing is just fine 11:40
lizmat the argument was that when passed as an arg to a sub, you wouldn't have to special case ranges, e.g. when sorting x-positions in a graph 11:41
Geth rakudo/main: e702f0e6a8 | (Elizabeth Mattijsen)++ | 4 files
Revert "Add `is-monotonically-increasing` method to Iterators"

This reverts commit 2edcaa351251ff33ebd48de7cdc18ac69b6edf0d.
It breaks (1..10).lazy.sort in a weird way; further investigation is needed
11:42
lizmat it also broke zef :-( 11:43
Geth roast: c642d66a12 | (Elizabeth Mattijsen)++ | S07-iterators/range-iterator.t
Revert "Add test for Iterator.is-monotonically-increasing"

This reverts commit 1af7b9e8748dd221723a176e972477d9c6359ef9.
11:49
Geth rakudo/main: 1f010bd8ae | (Elizabeth Mattijsen)++ | 3 files
RakuAST: simplify adding pod to compunit

It appears you *can* just initialize an attribute with Array.new. This simplifies adding pod from the grammar significantly: a simple
  $*CU.pod-content.push will do the trick
14:11
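(A standalone illustration of the observation in that commit message, not the actual RakuAST code; the class name is made up, while $*CU and pod-content echo the commit.)

    my class CompUnitish {
        has $.pod-content = Array.new;   # the default is evaluated per instance
    }

    my $*CU = CompUnitish.new;
    $*CU.pod-content.push('=head1 Example');   # "a simple .push will do the trick"
    say $*CU.pod-content;                      # [=head1 Example]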
[Coke] Any interest in adding JSONC support to one of the JSON parsers? Or would it be better as a separate module instead of an optional flag? 14:20
lizmat what's JSONC ? 14:22
[Coke] changelog.com/news/jsonc-is-a-supe...ments-6LwR 14:23
I have a large corpus of files here that appears to be using this variant. (right now I'm just reporting an error processing the JSON if I hit a comment, but it's everywhere)
lizmat so they're basically C-style comments 14:31
which can start whenever JSON is looking for the next " for the name of a field 14:32
[Coke] Yup, //single or /* multi-line */ 14:34
... Or value of a field, probably?
"stuff": /* nifty */ "value"
lizmat looks like an extra check for / in parse-thing, + a dedicated sub for nomming the comment, should be enough 14:37
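(A rough standalone sketch of such a helper, not the code from the PR lizmat links a bit later; the sub name and signature are made up.)

    # Skip a // or /* ... */ comment starting at $pos, advancing $pos past it.
    sub nom-comment(str $text, int $pos is rw) {
        if $text.substr-eq('//', $pos) {
            my $nl = $text.index("\n", $pos);
            $pos = $nl.defined ?? $nl + 1 !! $text.chars;
        }
        elsif $text.substr-eq('/*', $pos) {
            my $end = $text.index('*/', $pos + 2);
            die "Unterminated /* comment at position $pos" unless $end.defined;
            $pos = $end + 2;
        }
    }

    my $json = '{ "stuff": /* nifty */ "value" } // trailing note';
    my int $pos = $json.index('/');   # wherever the parser first spots a '/'
    nom-comment($json, $pos);
    say $json.substr($pos, 8);        # ' "value"', so parsing can resume nomming whitespace here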
[Coke] trying to find something in PowerShell I can use in the short term; looks like a regexp in Raku might be my quickest answer for today. 15:01
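(A quick preprocessing sketch along those lines, assuming JSON::Fast is installed. from-jsonc is a made-up wrapper name, and a naive regex like this will also eat // sequences that occur inside string values, e.g. in URLs.)

    use JSON::Fast;

    # Strip /* ... */ and // ... comments, then hand the result to the normal parser.
    sub from-jsonc(Str $text) {
        my $clean = $text
            .subst(/ '/*' .*? '*/' /, '', :g)   # multi-line comments
            .subst(/ '//' \N* /, '', :g);       # single-line comments
        from-json($clean);
    }

    say from-jsonc('{ "stuff": /* nifty */ "value" } // trailing');   # {stuff => value}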
lizmat fwiw, I'll have a PR ready in a few mins 15:04
meh, a few more mins :-) 15:20
[Coke] ... oh, nifty. 15:22
lizmat pesky off-by-ones :-) 15:35
github.com/timo/json_fast/pull/85 16:12
[Coke] ^^
[Coke] lizmat: this looks great, thank you - should we hide it behind an option? 16:38
lizmat: this looks great, thank you - should we hide it behind an adverb?
(oops)
maybe from-json($text, :jsonc), something like that?
lizmat [Coke]: I thought about that, but that would either require checking another flag while nomming whitespace 17:16
or some way to set the whitespace nommer, but either of those options feels like it would be more expensive
nine Could we not have a JSONC grammar that's derived from the JSON grammar and overrides the ws token? 17:18
lizmat JSON::Fast is *not* grammar-based 17:19
it's a state machine, basically
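(What nine's derived-grammar approach could look like, shown against a toy self-contained JSON grammar rather than JSON::Fast's actual code, since that is a hand-rolled state machine.)

    grammar TinyJSON {
        rule  TOP    { <.ws> <value> }
        rule  value  { <object> | <array> | <string> | <number> | 'true' | 'false' | 'null' }
        rule  object { '{' [ <pair> [ ',' <pair> ]* ]? '}' }
        rule  pair   { <string> ':' <value> }
        rule  array  { '[' [ <value> [ ',' <value> ]* ]? ']' }
        token string { '"' <-["]>* '"' }
        token number { '-'? \d+ [ '.' \d+ ]? }
    }

    grammar TinyJSONC is TinyJSON {
        # The only difference: comments count as whitespace.
        token ws {
            [
            | \s+
            | '//' \N*
            | '/*' [ <!before '*/'> . ]* '*/'
            ]*
        }
    }

    my $jsonc = q:to/END/;
        {
            "stuff": /* nifty */ "value",  // single-line comment
            "n": 42
        }
        END

    say TinyJSON.parse($jsonc)  ?? 'plain JSON: ok' !! 'plain JSON: no match';   # plain JSON: no match
    say TinyJSONC.parse($jsonc) ?? 'JSONC: ok'      !! 'JSONC: no match';        # JSONC: ok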
lizmat I seem to be missing some clues as to how to run a slang once it has been defined 18:17
token doc-content-toplevel {
<.before ^^ \s* '='> <attachment=.quibble(self.slang_grammar('Doc'))>
}
doesn't cut it
[Coke] .seen timotimo 18:27
tellable6 [Coke], I saw timotimo 2021-05-18T11:13:00Z in #raku: <timotimo> you'll need to sink the object you're currently calling .out on, rather than discarding it
[Coke] anyone seen timo outside of irc? 18:28
lizmat yeah, I've pinged him already
timo is his current nick 18:29
lizmat can you actually have slangs that don't end on a specific closer ? 18:32
[Coke] I'm not sure if you mean something specific by closer, but I don't think I used one in Slang::Date 18:42
lizmat yeah: all of the slangs in the ecosystem just mix changed / additional tokens / methods into the MAIN language 18:43
[Coke] .seen timo 18:56
tellable6 [Coke], I saw timo 2023-04-24T15:30:59Z in #moarvm: <timo> maybe here "no pr is bad pr" applies
[Coke] ^_^
vrurg is about to quit using Comma because IntelliJ is wasting more time than it saves. 19:00
gfldex vrurg: I agree. IDEs save you a lot of time, which you can then spend micromanaging the IDE. 19:13
ugexe IDEs save me so much time writing/refactoring Go that it's painful to work on the Perl parts 19:14
vrurg Nah, I'm fine with most of it – except for unpredictable, incurable freezes on re-indexing. 10-15 mins trying to get it back to life? Thanks, no.
vrurg wonder if these guys ever heard about threads and background tasks... 19:17
ugexe: BTW, is there a way to limit the number of pre-compiling rakudos? Because it's the other side of the coin: when Comma decides to recompile the entire project with deps, my 2019 MacBook just stops responding. 19:24
ugexe not that I know of, you'd probably have to ask Xliff since I think they changed it to be parallel 19:25
vrurg OK. Somehow it uses exactly as many processes as I have cores, but a quick peek into the sources didn't reveal how it does this. 19:27
.ask Xliff you seem to be the one behind parallel pre-compiling. Can you tell if there is a way to limit the maximum number of rakudo processes involved? 19:29
tellable6 vrurg, I'll pass your message to Xliff
nine AFAIK Xliff uses custom scripts to precompile in parallel 20:16