This channel is intended for people just starting with the Raku Programming Language (raku.org). Logs are available at irclogs.raku.org/raku-beginner/live.html
Set by lizmat on 8 June 2022.
rcmlz :-) 08:51
if you replace the for ^($prime**$power) -> $val { in line 18 with (^($prime**$power)).race(batch => 1, degree => Kernel.cpu-cores).map( -> $val { ... perhaps you can get it faster ... 10:16
(I assume batch size 1 is appropriate here ...) 10:17
and put the closing ) for the map() in line 41 10:19
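A minimal sketch of the rewrite being suggested here, with a placeholder loop body (the real code's body and its $prime/$power values are not shown in the log):

```raku
# Assumed example values; the actual program's values differ.
my $prime = 17;
my $power = 3;

# Before (single-threaded):
#   for ^($prime ** $power) -> $val { ... }
#
# After: race() runs the map body on a pool of worker threads.
# Results may arrive in any order, so sort before comparing.
my @results = (^($prime ** $power))
    .race(batch => 1, degree => Kernel.cpu-cores)
    .map( -> $val {
        # placeholder body standing in for the real computation
        $val % $prime;
    });

say @results.elems;
```

Note that race() makes no ordering guarantee, so this only helps when the loop iterations are independent of each other.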
nemokosch could for not consume the race sequence as well? 10:42
I don't actually know but it would be reasonable and practical 10:43
rcmlz hyper for ^100 -> $a { works as well - and is even simpler - thanks for the hint. 12:49
nemokosch no race? 12:51
you know the difference between hyper and race, right?
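A small sketch of that difference: hyper preserves the input order of results, while race hands them back in whatever order the worker threads finish. The values and batch size below are illustrative only.

```raku
# hyper: parallel, but results come back in input order
my @ordered = (1..10).hyper(batch => 2).map(* * 10);

# race: parallel, results in completion order (any order)
my @unordered = (1..10).race(batch => 2).map(* * 10);

say @ordered;           # always 10 .. 100 in order
say @unordered.sort;    # same values, guaranteed only after sorting

# The statement-prefix forms mentioned in the chat also exist:
#   hyper for ^100 -> $a { ... }   # ordered
#   race  for ^100 -> $a { ... }   # unordered
```

race is typically a bit faster when you don't care about result order, e.g. when you only accumulate or count.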
lizmat and yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2023/10/16/2023-42-2000/ 13:07
rcmlz race as well
yes
nhail What does batch size mean? I don’t know much about race 14:57
Well I threw in the multithreading and it did speed it up quite a bit 15:47
I then decided to run it looking for solutions mod 17^6 15:49
~24 million
update: it took 30 mins 16:39
rcmlz batch: how many items from the input list get processed at once. When a single item takes long to process, batch should be small; when a single item is quickly calculated, batch can be larger 17:02
nhail So actually batch should be kinda big here 17:13
It's just a polynomial and a %
rcmlz it controls the trade-off between the overhead of sending data to a thread and keeping all cores busy
nhail Okay, makes sense
I'm working on a new version that uses some more number theoretic machinery instead of a raw brute force search 17:14
rcmlz I suggest testing the impact of various batch sizes. Have a look at Rosetta Code, I put an example for parallel quick and merge sort there. 17:15
I mean for benchmarking the impact of batch size. 17:16
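A minimal harness for that kind of benchmarking, using now to time each run. The workload here is a stand-in (counting multiples of 7), not the polynomial search from the chat; absolute timings will vary by machine.

```raku
# Time one race() pass over a dummy workload with a given batch size.
sub time-batch(Int $batch --> Real) {
    my $start = now;
    my $count = (^1_000_000).race(:$batch).grep(* %% 7).elems;
    now - $start;
}

# Compare a few batch sizes; run each a couple of times, since
# thread-pool warm-up can skew the first measurement.
for 1, 64, 1024, 16384 -> $batch {
    printf "batch %6d: %.3f s\n", $batch, time-batch($batch);
}
```

As rcmlz notes below, there is no universal optimum: the best batch size depends on the per-item cost, the total item count, and the core count, so measuring is the only reliable way to pick one.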
nhail For 17^5: batch 10 took 49s. batch 100 took 39s. batch 500 took 36s. 17:20
This is on an Intel i5-3340M, on my laptop
rcmlz and 2**15? 17:21
it all depends on the total number of items, of course, and also on the number of cores. 17:22
I was under the assumption that number of items / cores should be optimal, but my experiments did not support that assumption. 17:23
nhail As the prime power? All sizes finished in under a second 17:24
With 2**15 as the batch size for 17**5, it was 38 seconds 17:26