| 23 Dec 2025 | |||
| ab5tract | .vushu: next step may just be to add texture rotation based on the wind :) | 11:40 | |
| disbot7 | <.vushu> @ab5tract maybe you can finish up your project 😄 | 11:49 | |
| <.vushu> @grondilu thanks for the fix. --force-install isn't necessary, I just didn't bother uninstalling before installing. | 11:50 | ||
| Voldenet | simon_sibl: the raku code allocates a gigantic hash for every single element, so it's slow and memory hungry as expected | 12:20 | |
| I mean, the hash is probably bigger than the things it contains | 12:21 | ||
| `priority_stats[priority][0]` vs `%result{$j<priority>}` | 12:25 | ||
| on top of that, priority_stats uses an int-keyed dictionary, while hashes in raku only allow string keys |||
| the python code is just a lot stricter about performance, the raku code is stricter about readability | 12:29 ||
| `priority_stats[priority][1] += 1` yeah totally :) | |||
| `%result{$j<priority>} += $wait;` uses the <op>= metaop, which adds some overhead too | 12:30 ||
| also, I'm not sure why none of these code snippets simply use classes for jobs if performance is important | 12:33 | ||
| classes are always a huge win in both readability and performance, because they give code optimization opportunities | 12:35 ||
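A minimal sketch of the class-based shape being suggested here; the `Job` class and its `priority`/`wait` attributes are assumptions, since the benchmark code itself isn't shown in this log:

```raku
# hypothetical class-based version of the per-priority aggregation;
# the attribute names are guesses, not the actual benchmark's fields
class Job {
    has Int $.priority;
    has Int $.wait;
}

my @jobs = Job.new(priority => 1, wait => 10),
           Job.new(priority => 2, wait => 5),
           Job.new(priority => 1, wait => 3);

# small integer priorities can index a native int array directly,
# so no hash lookup (or key stringification) happens per element
my int @total-wait;
for @jobs -> $job {
    @total-wait[$job.priority] = @total-wait[$job.priority] + $job.wait;
}
say @total-wait;   # per-priority wait totals
```

This only illustrates the shape of the suggestion: typed attributes plus a flat accumulator instead of a hash per element.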
| btw, I've looked at the benchmark and it's ridiculous: you're comparing the runtime of processes that take 5ms | 12:53 ||
| ab5tract | Great points, but I want to clarify that object hashes do exist | ||
| Voldenet | using that methodology can lead to the enlightening thought that js is faster than rust :> | ||
| since compilation would take time | |||
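A sketch of a measurement that sidesteps the startup/compilation problem: time only the loop under test, on enough synthetic data to dwarf timer noise (the job data below is made up):

```raku
# made-up job data, only so there is something to iterate over
my @jobs = (^100_000).map: { %( priority => (^5).pick, wait => (^1000).pick ) };

my %result;
my $start = now;
for @jobs -> $j {
    %result{$j<priority>} += $j<wait>;
}
say "hot loop took {now - $start}s";   # compare this, not the whole-process runtime
```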
| ab5tract: by "object hashes" you mean hashes that contain object's data or hashes that are shaped like an array by some optimization? | 12:58 | ||
| ab5tract | m: my %h is Hash[Int,Int]; dd %h{121} = 42 | 13:00 | |
| camelia | Int = 42 | ||
| ab5tract | m: my %h is Hash[Int,Int]; dd %h{“121”} = 42 | 13:01 | |
| camelia | Type check failed in binding to parameter 'key'; expected Int but got Str ("121") in block <unit> at <tmp> line 1 | ||
| disbot7 | <antononcube> @simon_sibl Well, thanks for putting the benchmark together. I hear that Raku can do it (or some of it) faster, which — hopefully — means that someone can re-write it. | ||
| <antononcube> @librasteve Is there a performance page in raku.org ? Some of the points @Voldenet made above might be good to be there… | 13:04 | ||
| Voldenet | ab5tract: is that implemented on nqp level? Because if not it wouldn't beat python's dictobject (which uses C) | 13:09 | |
| in nqp I can only find string-based hash | |||
| lizmat | there is only a string based hash | 13:14 | |
| all other hash types, such as object hashes, revert to a string under the hood | 13:15 ||
| a hash with keys limited to Int is basically an object hash with a constraint on the key |||
| so slower | 13:16 | ||
| Voldenet | though in theory it is possible to implement a hashtable in pure raku | 13:20 | |
| and in theory it doesn't have to be slower than C | 13:21 | ||
| lizmat | my reference to "so slower" is about the current implementation :-) | 13:24 | |
| now if objects of value types shared their unique values, we could use the object id as a key in the hash | 13:25 ||
| but unfortunately, they don't | |||
| Voldenet | you can expect the boring equality contract to be implemented | 13:26 | |
| hashCode + equals | |||
| disbot7 | <librasteve> @antononcube there is no performance page on raku.org --- the goal of raku.org is to convert new potential raku users, so it doesn't really belong there imo --- I would support a Performance page in the docs (docs.raku.org/introduction) with an emphasis on how to avoid the pitfalls listed above. | 13:27 | |
| Voldenet | I remember TypedHash using .WHICH, which usually requires some serialized string; I'm not sure if that's still valid | ||
| disbot7 | <librasteve> oh - there is one already => docs.raku.org/language/performance..._your_code | 13:28 | |
| <librasteve> so - suggest we gather anything missing and apply as a PR to that docs page | 13:29 | ||
| <librasteve> @simon_sibl - note the performance page in the docs ^^ | |||
| lizmat | Voldenet: that's how object hashes work | ||
| disbot7 | <librasteve> may help to improve, I have used the profiler from time to time | ||
| Voldenet | …wait, so does that mean that a typed int hash is slower than a regular hash? | 13:30 ||
| m: my %h is Hash[Int,Int]; for ^100 { %h{$_} = $_ + 2; }; say now - BEGIN now | |||
| camelia | 0.0347615 | ||
| Voldenet | m: my %h is Hash; for ^100 { %h{$_} = $_ + 2; }; say now - BEGIN now | 13:31 | |
| camelia | 0.015954565 | ||
| lizmat | yup | ||
| I once wrote raku.land/zef:lizmat/Hash::int | |||
| Voldenet | yeah, I remember, it was wildly faster because of nqp ops | ||
| lizmat | but I'm afraid the performance advantage has largely evaporated after the new dispatch | 13:32 | |
| Voldenet | I'm starting to wonder if a `role Equatable` and a regular list-based hashtable wouldn't be fast enough to beat the string-based Hash for ints | 13:35 |
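For the record, a rough sketch of that idea; the role, the `IntKey` wrapper, and the bucketed table below are all made up for illustration (this is not an existing module), and whether something like it could actually beat the nqp string hash is exactly the open question:

```raku
# the "boring equality contract": hash-code + equals, Java-collections style
role Equatable {
    method hash-code(--> Int) { ... }
    method equals($other --> Bool) { ... }
}

class IntKey does Equatable {
    has Int $.value;
    method hash-code(--> Int) { $!value }
    method equals($other --> Bool) { $other ~~ IntKey and $other.value == $!value }
}

# a naive bucketed (list-based) hashtable that never stringifies its keys
class BucketTable {
    has Int $.buckets = 64;
    has @!slots;

    method set(Equatable $key, $val) {
        my $slot := (@!slots[$key.hash-code mod $!buckets] //= []);
        for @$slot -> $entry {
            if $entry[0].equals($key) { $entry[1] = $val; return }
        }
        $slot.push: [$key, $val];
    }

    method get(Equatable $key) {
        my $slot := @!slots[$key.hash-code mod $!buckets];
        return Nil without $slot;
        for @$slot -> $entry {
            return $entry[1] if $entry[0].equals($key);
        }
        Nil
    }
}

my $t = BucketTable.new;
$t.set(IntKey.new(value => 121), 42);
say $t.get(IntKey.new(value => 121));   # 42
```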