This channel is intended for people just starting with the Raku Programming Language (raku.org). Logs are available at irclogs.raku.org/raku-beginner/live.html. Set by lizmat on 8 June 2022.
03:15 human-blip left
03:16 human-blip joined
04:00 deadmarshal_ left
04:34 deadmarshal_ joined
04:51 human-blip left
04:53 human-blip joined
rcmlz | That was a good read, thanks Liz for the good stuff you create. | 06:14
06:41 jmcgnh left
06:42 jmcgnh joined
librasteve | I definitely second that; it makes me think that raku should be renamed tortoise | 08:11
I recall raiph wrote something on how everything in raku is built on something else (which coders can use if they are unhappy with the standard abstractions). Words like Int.^HOW.^HOW.^HOW come to mind … I wonder, lizmat, whether you have written anything on this? | 08:14
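(A minimal sketch of the introspection chain librasteve alludes to; .HOW, .^name and .^mro are core Raku, though the exact metaclass name printed depends on the Rakudo build:)

    say Int.^name;       # "Int"; .^foo forwards the call to the metaobject
    say Int.^mro;        # ((Int) (Cool) (Any) (Mu)), the method resolution order
    say Int.HOW.^name;   # "Perl6::Metamodel::ClassHOW" on a stock Rakudo
    # ...and that metaobject has its own HOW, all the way down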
lizmat | I haven't written anything about that (yet) | 09:26
rcmlz | Is there already a publication emphasizing/comparing best practices for using lazy stuff over non-lazy things? Maybe this would help people like me leap forward, like "learning apply in R" did. | 09:35
Maybe with some latency/throughput comparison between both approaches, as "code you should try yourself" | 09:37
I remember trying the code from here: github.com/Raku/marketing/blob/9ba...053519.pdf | 09:42
Just to find out that it is not as fast as promised on the poster | 09:43
➜ ~ time raku -e "(^∞).grep(*.is-prime)[10000].say"
104743
raku -e "(^∞).grep(*.is-prime)[10000].say"  0,28s user 0,04s system 115% cpu 0,274 total
➜ ~ time raku -e "(^∞).hyper.grep(*.is-prime)[10000].say"
104743
raku -e "(^∞).hyper.grep(*.is-prime)[10000].say"  0,47s user 0,08s system 158% cpu 0,350 total
So for me it would help to have code snippets to execute, to understand and test the iterator/laziness things. | 09:44
sort of seeing the benefits, in terms of "seeing is believing" | 09:48
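(A minimal sketch of the kind of snippet rcmlz is asking for; Seq, .is-lazy and on-demand reification are core Raku:)

    my $squares = (^∞).map(* ** 2);   # a Seq; nothing is computed yet
    say $squares.is-lazy;             # True: backed by a lazy iterator
    say $squares[5];                  # 25: only elements 0..5 get produced
    say (1..10).map(* + 1).is-lazy;   # False: finite sources stay eager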
lizmat | since the time of that example, two things have happened: | 10:09
the determination of .is-prime has become *much* faster, so the 10000th prime number takes almost no effort | 10:10
m: (^∞).grep(*.is-prime)[500000].say; say now - INIT now | 10:15
camelia | 7368791 2.796253531
lizmat | now, because .is-prime is much faster, the default batch size for .hyper is waaay too small | 10:16
m: (^∞).hyper.grep(*.is-prime)[500000].say; say now - INIT now
camelia | 7368791 4.734258993
lizmat | you see the overhead of hyper kicking in
m: (^∞).hyper(batch => 2048).grep(*.is-prime)[500000].say; say now - INIT now
camelia | 7368791 0.712647335
lizmat | with a greater batch size, you *can* see that hyper makes a difference
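(A hedged sketch for reproducing lizmat's point locally; batch is the named argument of .hyper that her example uses, and the sizes tried here are arbitrary:)

    # time the same pipeline at several batch sizes; with a test as
    # cheap as .is-prime, bigger batches amortize the dispatch overhead
    for 64, 1024, 2048, 16384 -> $size {
        my $start = now;
        (^∞).hyper(batch => $size).grep(*.is-prime)[500000].say;
        say "batch $size: {now - $start}s";
    }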
sadly, the design of .hyper does not allow the batch size to be adapted "on the fly" | 10:18
that's what I tried to address with raku.land/zef:lizmat/ParaSeq
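(Hedged: if memory serves, ParaSeq exports a hyperize helper that slots into the same position as .hyper; check the module's README for the actual, current API:)

    use ParaSeq;   # assumed API, may have changed since
    say (^∞).&hyperize.grep(*.is-prime)[500000];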
however, it looks like that module will need some TLC and possibly some re-design at some stage as well | 10:30
rcmlz | Thank you for the explanation. I enjoy using Raku in its current state a lot, so no stress on that. And Raku's execution speed has so far not been an issue for my problems. | 11:57
You mean the "red flags" from the actions? | 11:58
lizmat | yes... the red flags are because some of the tests fail... and they fail intermittently, pointing at some kind of race condition | 12:01
which I haven't been able to fix yet :-(
also, it looks like the overhead in ParaSeq is too high; I need to think of a better way to pass performance info between the dispatcher and the threads | 12:02
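(Not ParaSeq's actual design, just a sketch of one general way to pass performance info from worker threads back to a dispatcher, using a core Channel; the workload and counts are invented:)

    my Channel $stats .= new;
    my @workers = (^4).map: -> $id {
        start {
            for ^3 {
                my $t0 = now;
                (^50_000).grep(*.is-prime).elems;   # stand-in workload
                $stats.send: ($id, now - $t0);      # report timing back
            }
        }
    }
    await @workers;
    $stats.close;
    say "worker {.[0]}: {.[1]}s" for $stats.list;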
rcmlz | I asked ChatGPT which programming languages actually have "batch size of parallel executed tasks adapted on the fly". It came up with Python, Julia, Scala, Rust, Go, C++ and Raku: "7. Raku: Raku supports parallel constructs (start, await) and hyperoperators (>>, <<). While not as sophisticated as dedicated frameworks, custom task schedulers can adapt workloads. Would you like to see a Raku example for such adaptive batching?" | 12:08
The example it gave me was not adjusting the batch size of parallel execution, but of single-thread execution ... | 12:09
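(For contrast, a hedged sketch of what genuinely adaptive batching of parallel tasks could look like with core start/await; the thresholds and bounds are invented for illustration:)

    my @work  = 1 .. 50_000;
    my $batch = 64;
    my @primes;
    while @work {
        my @chunk = @work.splice(0, $batch);
        my $t0    = now;
        # fan the chunk out over roughly 4 promises, one slice each
        my @parts = await @chunk.batch(ceiling(@chunk / 4)).map:
            -> @slice { start @slice.grep(*.is-prime).List };
        # grow the batch when a round was fast, shrink it when slow
        $batch = now - $t0 < 0.05 ?? min($batch * 2, 8192)
                                  !! max($batch div 2, 16);
        @primes.append: @parts.map(*.Slip);
    }
    say @primes.elems;   # 5133 primes up to 50_000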
antononcube | Wolfram Language / Mathematica has parallel batch size calculations "on the fly". (Of course!) | 12:12
rcmlz | If it is not in ChatGPT, it does not exist! | 12:13
You know, in the old days "you had to be on the first page on Google", but that might change ;-) LOL | 12:14
PS: It probably does not really matter whether Python or any of the listed languages really has that capability - e.g. to my understanding, removing the GIL in Python only started recently - but "information" from sources like ChatGPT might influence a lot of "less technical" people, since falsifying it takes effort that those people will probably avoid. | 12:20
antononcube | Chat-GPT - Shmack-Shmi-PT -- using it is so 2023 or 2024. In 2025 what matters is what DeepSeek says. | 13:07
Which means, I guess, that I have to demonstrate soon how to use DeepSeek with Raku. | 13:08
rcmlz | LOL | 13:38
17:52 msiism joined
18:03 snonux joined
23:10 msiism left
23:28 DarthGandalf left
23:29 DarthGandalf joined