9 Oct 2024 | |||
lizmat | <1K open issues :-) | 17:29 | |
[Coke] | \o/ | 17:32 | |
vrurg | lizmat: considering the `handles` case, you'd need something extra to handle the `handles => *`. Though it could be another entry in the hash. | 19:11 | |
lizmat: Another point: conflicts. Especially dynamically added methods. The current semantics would be hard to follow then. | 19:12 | ||
Though I'm not happy with the semantics either way. Using `handles` as a fallback resolution is more appealing to me. In which case `*` becomes the fallback of fallback. :) | 19:14 | ||
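(Editorial aside: a minimal Raku sketch of the two `handles` forms under discussion — an explicit delegation list versus the `*` (Whatever) fallback form. Class and attribute names are invented for illustration.)

```raku
# Explicit delegation: only the listed methods are forwarded to the attribute.
class Wrapped {
    has $.payload handles <uc lc>;
}
say Wrapped.new(payload => "hi").uc;   # HI

# Whatever fallback: any method not found on the class itself
# is delegated to the attribute.
class PassThru {
    has $!inner handles *;
    submethod BUILD(:$!inner) { }
}
say PassThru.new(inner => -42).abs;    # 42
```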
MasterDuke | lizmat: you're running the whateverables now right? i just killed bisectable somehow | 22:00 | |
huh. may have just killed it again. not sure why this particular query is causing it so much trouble though | 22:06 | ||
lizmat | MasterDuke: AlexDaniel is still running them | 22:32 | |
I just run Geth and RakuIRCLogger | |||
10 Oct 2024 | |||
ab5tract | lizmat awesome! :D | 06:42 | |
^^ ( <1K open issues) | 06:43 | ||
Geth | roast: 994d0be545 | (Elizabeth Mattijsen)++ | S05-grammar/inheritance.t Add test for #2851 |
09:50 | |
doomlord | Hello, what would it take to change the MultiDimArray repr to use an array to implement fixed dimensions as strides and expose this functionality to raku-code? | 11:15 | |
I think that’s how people do this nowadays | 11:16 | ||
Changing the C side is trivial, how do you expose that to Raku? | 11:17 | ||
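(Editorial aside: the "fixed dimensions as strides" idea, sketched in plain Raku. Row-major strides turn a multi-dimensional index into a single flat offset; the sub names here are invented for illustration.)

```raku
# Row-major strides for a fixed shape: the last dimension varies fastest.
sub strides-for(@shape) {
    my @strides = 1;
    for @shape.skip(1).reverse -> $dim {
        @strides.unshift: @strides[0] * $dim;
    }
    @strides
}

# Flat offset of a multi-dimensional index: sum of index × stride.
sub flat-index(@idx, @strides) {
    [+] @idx Z* @strides
}

my @shape   = 2, 3, 4;                # a 2×3×4 array, 24 elements
my @strides = strides-for(@shape);    # [12 4 1]
say flat-index([1, 2, 3], @strides);  # 23, the very last element
```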
lizmat | through NQP or through dispatch calls: disclaimer: never done that myself | 11:19 | |
question probably better asked on #moarvm | |||
nine patrickb ab5tract suggestions? | |||
doomlord | lizmat: thanks, was it considered to implement that in pure raku on top of the Buf representation? That would be an alternative solution (dunno about performance) | 11:22 |
lizmat | I don't think jnthn ever considered that... but I have at some point :-) | 11:27 | |
if only for the fact that it would also run on the JVM | |||
doomlord | Indeed | ||
lizmat | also: I think with new-disp, a pure Raku / nqp implementation would only be one order of magnitude slower than a pure C one | 11:28 |
doomlord | Would be nice to hear some arguments against that | 11:29 | |
Apart from this performance aspect |||
Geth | rakudo/main: a9c9e39850 | (Elizabeth Mattijsen)++ | src/core.c/Complex.rakumod Make prefix - not negate 0 as real value in i Fixes #2986 |
11:43 | |
roast: f74eff7a6a | (Elizabeth Mattijsen)++ | S32-num/complex.t Add tests for #2986 |
11:44 | ||
rakudo/main: 1cedcee68f | (Elizabeth Mattijsen)++ | t/12-rakuast/xx-fixed-in-rakuast.rakutest Add test for #2996 |
11:49 | ||
doomlord | lizmat: I think the future-proof way would be to give Bufs the possibility to move between devices, as can be done in torch | 11:54 |
lizmat | "Bufs to move between devices" what do you mean by that? | 11:55 | |
doomlord | Practice shows that the overhead is easily overcome by running shaders on the gpu | ||
I mean that now there is the assumption that Bufs are on the main memory | 11:56 | ||
lizmat | ah, ok, gotcha | ||
doomlord | This is not the case in modern numerical libraries: a Buf can live on the gpu ||
lizmat | I suggest making a problem solving issue for this, so more people can take part in this discussion | 11:58 | |
doomlord | Indeed | ||
lizmat | afk for quite a few hours& | ||
nine | I think a more productive way is to just dig into it and provide an example implementation. You can easily base your fixed dimensional array on a 1 dimensional native array or even a Buf (though I don't think Buf provides any advantages here). | 12:29 | |
Doing it in pure Raku lets you focus on the API and algorithms first. When you have figured those out, you can profile and check where the performance bottleneck actually is and what changes to the VM would improve that state. Maybe it requires the VM knowing more about the storage format but maybe it's just better optimization of natively typed code in general. | 12:30 | ||
If you want to be able to store it in GPU memory, then maybe basing on Buf is indeed helpful. | 12:32 | ||
FWIW I have long considered heterogeneous architectures (i.e. GPU computing) to be a bit of an Achilles heel or architectural blind spot of Raku. But maybe it's not all that bad. I guess we'll only know once someone actually tries to use GPUs from Raku for some heavy computation. | 12:34 ||
After all, Python of all languages is used heavily in this area. Though it's really just used to set up the computation pipeline rather than to do the computation itself. | 12:36 ||
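(Editorial aside: a sketch of nine's suggestion above — a fixed-shape array in pure Raku, backed by a flat native `num` array with stride-based indexing. All names are invented; this is a starting point for experimenting with the API, not a proposed implementation.)

```raku
class FixedShapeArray {
    has @.shape;
    has @!strides;
    has num @!storage;

    submethod TWEAK() {
        # Compute row-major strides from the shape.
        my $stride = 1;
        for @!shape.reverse -> $dim {
            @!strides.unshift: $stride;
            $stride *= $dim;
        }
        # One flat native backing store for all elements, zero-initialized.
        @!storage = 0e0 xx ([*] @!shape);
    }

    method AT-POS(*@idx) {
        @!storage[ [+] @idx Z* @!strides ]
    }

    method ASSIGN-POS(*@idx) {
        my $value = @idx.pop;   # last argument is the value to store
        @!storage[ [+] @idx Z* @!strides ] = $value.Num;
    }
}

my $m = FixedShapeArray.new(shape => (2, 3));
$m.ASSIGN-POS(1, 2, 42);
say $m.AT-POS(1, 2);   # 42
```

Profiling something like this first, as nine suggests, would show whether the bottleneck is the stride arithmetic itself or the general optimization of natively typed code.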
timo | python's "buffer" API is something i've wanted for raku in the past | 13:04 |