🦋 Welcome to the MAIN() IRC channel of the Raku Programming Language (raku.org). Log available at irclogs.raku.org/raku/live.html . If you're a beginner, you can also check out the #raku-beginner channel!
Set by lizmat on 6 September 2022.
roguerakudev is there any way to look up an enum value based on its ordinal? Everything I've tried just gets the key as a string, but I want an instance of the enum type 00:28
the only thing I've found that works is calling pick with the total number of values, but that's really asinine 00:33
leont m: say Order.^enum_from_value(1) 00:42
camelia More
Voldenet there's even easier way 01:24
m: say Order(1)
camelia More
Voldenet but it's slightly different, because it uses a coercer that internally uses enum_from_value 01:28
Voldenet related code github.com/rakudo/rakudo/blob/3c9f...akumod#L40 01:34
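For reference, a minimal sketch pulling the two lookups above together (the Direction enum here is a made-up example, not something from the thread):
    enum Direction <north east south west>;   # ordinals 0..3
    say Direction.^enum_from_value(2);        # south (meta-object lookup by ordinal)
    say Direction(2);                         # south (coercion, which uses enum_from_value internally)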
Xliff #raku 12:08
\o
Meh.
Does anyone live in Dayton, OH?
I'd like to be there for the Total Eclipse 12:09
antononcube Well, I live close to Daytona, FL, but that is not close (geographically). 14:18
tbrowder__ Voldenet: my Date::Event class has what I think is an easy and practical way to handle enums 15:11
patrickb I'm searching for some reference for how long typical tasks in software take / how costly they are (I know that I'm pretty bad at estimating this). E.g. a branch, a call, a network request, a lock, a syscall, different memory accesses, ... Any recommendations of where to look? 15:17
japhb patrickb: Do you mean at the hardware level or in a high level language? 15:44
patrickb both. I guess it's important to develop a feel for how expensive things are to make the right tradeoffs. Current case I'm pondering: Is using a mutex in the IO::Socket::SSL hot path (a `protect` on every packet received) a show stopper, or are the lock costs in a different ballpark from stuff arriving over the network? 15:48
japhb Oh, I definitely think that's a good idea. 15:49
Xliff patrickb: Good questions! I could use something like that too.
japhb At the hardware level, since processors have been doing branch prediction for decades now, it's primarily *mispredicted* branches and calls that have a cost, because they flush the pipelines and may require loading uncached code. Correctly predicted branches have only normal instruction costs. 15:51
japhb Memory levels tend to be somewhere around: Registers: "instant", L1 cache: 2-3 cycles, additional cache levels approximately double the previous level, main mem on same NUMA node: ~100 cycles, additional time + possibly bus/switch contention going to another NUMA node. 15:54
Network access is 0.1-1 ms for loopback, but considerably slower for anything further. Speed of light is ~30 cm/nanosecond (or around 1 foot / nanosecond, jokingly called a "phoot": 'photon-foot'), and through fiber or copper signals travel at about 2/3 that speed. That's your limit for performance, but a lot of time gets lost in buffering, wire speed queuing, imperfect cable routing, etc. Assume 10-100 ms
for most people pinging through a wired network, considerably more for cellular and other wireless networks.
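As a rough sanity check of those figures (assuming 1000 km of fiber, a distance picked just for illustration):
    my $c        = 3e8;          # speed of light in m/s (~30 cm/ns)
    my $distance = 1_000_000;    # 1000 km in metres, an arbitrary example distance
    my $one-way  = $distance / ($c * 2/3);   # signals in fiber travel at roughly 2/3 c
    say $one-way * 1000;         # ~5 ms one way, so ~10 ms round trip before buffering and queuing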
Locks and syscalls are *very* dependent on OS and whether the locks are user-space or kernel-space or some hybrid. 15:58
For locks, the biggest slowdown is contention. The less contention on a lock, the faster it goes. Highly contended locks soak your code in molasses. Lock contention grows with both number of threads working in the same code and the length of time the "inside" of the lock protection takes. Reducing either of those helps a lot. 16:00
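A minimal Raku sketch of that last point: do the slow work outside the lock so the protected region stays as short as possible (the sub names here are made up for illustration):
    # hypothetical stand-in for whatever per-item work is actually done
    sub expensive-parse($packet) { $packet.uc }

    my $lock = Lock.new;
    my @log;

    sub handle-packet($packet) {
        my $parsed = expensive-parse($packet);   # slow work happens outside the lock
        $lock.protect({ @log.push: $parsed });   # only the shared-state update is protected
    }

    handle-packet($_) for <a b c>;
    say @log;   # [A B C]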
Syscall time is dependent on processor type and generation, as well as how many "rings" (layers of CPU security) you go through. For example, user space to hypervisor takes longer than user space to current OS. Some processors have special functionality to speed up VM syscalls because that's what primarily runs on them (Xeons are the Intel version, for instance). 16:02
Uncontended locks and "fast" syscalls are a few 10s to 100s of clock cycles. 16:03
Again, this is all at the lowest levels. But does that give you useful starting info? 16:04
patrickb That's a very good start, thanks! In the current case, going with 300 cycles for a rather uncontended lock and having a 2 GHz CPU, that makes 0.00015 ms for lock access. Compared to 0.1-1 ms for loopback, yeah, that's a different ballpark. This is exactly the kind of reference I was looking for. \o/ 16:13
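The same arithmetic as a one-liner, for anyone following along (300 cycles and 2 GHz are the assumed numbers from above):
    say 300 / 2e9 * 1000;    # 0.00015 ms per lock acquisition at ~300 cycles on a 2 GHz core
    say 0.1 / 0.00015;       # ~667: even the fastest loopback round trip dwarfs the lock cost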
lizmat and yet another Rakudo Weekly News hits the Net: rakudoweekly.blog/2024/02/26/2024-...s-zombies/ 19:12
antononcube 🌻 🫛 19:27
lizmat :-) 19:28
Voldenet tbrowder__: there's no need to do `%m = EType.enums` and then `%m{$v}`; `EType::{$v}` could be used instead 20:41
somewhat confusing example:
m: enum N (11..31); my IntStr $n = <12>; N::{$n}.say; N($n).say
camelia 12
23
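For anyone puzzled by that output: `N::{$n}` looks the element up by key name (the key "12" exists, with value 1), while `N($n)` coerces from the underlying value, and value 12 belongs to the key "23". A smaller sketch of the same distinction, using a made-up enum:
    enum Colour <red green blue>;    # keys red/green/blue with values 0/1/2
    say Colour::<green>;             # green (lookup by key name)
    say Colour(1);                   # green (coercion from the underlying value)
    say Colour::<green>.value;       # 1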
gfldex m: enum Foo (1..42); say Foo::<42>.WHO.WHAT; 22:13
camelia (Stash)
Xliff \o 23:59