github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
07:38 domidumont joined
nwc10 good *, #moarvm 07:57
08:49 domidumont left 08:53 sena_kun left 09:00 frost-lab joined 09:01 zakharyas joined 09:02 zakharyas left, zakharyas joined 09:03 sena_kun joined
MasterDuke hey hey 09:34
jnthn: would it make sense to add the capability to "reset" spesh? or would the incoming ability to remove specific optimizations be sufficient? 09:36
i was thinking of this because there were a couple cases recently where people were benchmarking by running several similar pieces of code in a single script, but the later pieces were "artificially" slowed down because spesh had optimized for the first piece 09:39
i.e., the pieces when run by themselves had much more similar timings than when timed as part of a single script 09:41
09:56 cog left 09:57 cog joined 10:00 lizmat joined
jnthn MasterDuke: No, I think they should just run those in separate processes, pretty much. Spesh is only one source of noise. 10:05
Of course dropping bad choice specializations is also a good thing, and may help such situations too 10:06
MasterDuke k 10:13
10:41 Kaiepi left 10:51 squashable6 left 10:52 squashable6 joined, squashable6 left 10:53 squashable6 joined 10:55 domidumont joined 11:25 Kaiepi joined 11:33 avarab left 11:34 avar joined 11:35 Kaiepi left, frost-lab left 11:37 Kaiepi joined 11:52 zakharyas left 12:55 zakharyas joined
MasterDuke i've been thinking about github.com/rakudo/rakudo/issues/36...-613561305 and i wonder if it'd make sense to just get rid of nqp::time_n and change nqp::time_i to return nanoseconds 14:43
greppable6: nqp::time_n 14:52
greppable6 MasterDuke, 33 lines, 5 modules: gist.github.com/6738789120beeaedfe...4873cc4d04
MasterDuke those shouldn't be terribly difficult to create PRs for. and if doing so means now() is much faster, maybe they can be converted to use it instead of the nqp op 14:54
is there a strong reason i don't know about for having time available as num/floating point? 14:58
moon-child not afaik. If you need high precision, the way to go is fixed-point + github.com/facebookarchive/Flicks 14:59
MasterDuke java and javascript only give millis, but they can just multiply by 1000000 instead of the current behavior of dividing by 1000.0 15:08
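[Editor's note: a minimal sketch of the integer-nanosecond idea under discussion, using plain POSIX clock_gettime rather than MoarVM's actual time code; the function name now_ns is invented for illustration.]

```c
#include <stdint.h>
#include <time.h>

/* Sketch only: wall-clock time as an integer count of nanoseconds, the
 * representation proposed here for a unified nqp::time op. MoarVM's real
 * implementation lives in its own source and may differ. */
int64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + (int64_t)ts.tv_nsec;
}

/* A backend with only millisecond resolution can still satisfy an
 * integer-nanosecond contract by scaling up (exact), rather than scaling
 * down to floating-point seconds as the num-returning op does today:
 *     int64_t ns = millis * 1000000LL;   // lossless
 */
```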
:q
wrong window 15:09
lizmat MasterDuke++ 15:16
re nqp::time_i and nqp::time_n
but perhaps to remain a little compatible, add a new nqp::time_ii ?
to at least make the transition easier ? 15:17
inside core and outside of it ?
MasterDuke greppable6: nqp::time_i 15:26
greppable6 MasterDuke, Found nothing!
MasterDuke there really aren't that many uses of time_(i|n) in the core. ~30 in nqp and ~70 in rakudo 15:28
15:34 rba left, rba joined, zakharyas left 15:35 bonsaikitten left 15:36 xiaomiao joined 15:38 nine left, avar left, nine joined, avar joined, avar left, avar joined 15:39 lizmat left
MasterDuke hm, if there's only one variant of an op it doesn't have an '_(i|n|s|o)', right? so i'd be removing `time_n` and changing `time_i` to just `time` 16:32
16:37 zakharyas joined 16:42 Kaiepi left 16:43 Kaiepi joined
jnthn Yes, that follows the convention 16:57
MasterDuke moarvm changes only took a couple min, but time for dinner, etc. hopefully will see how easy nqp/rakudo changes are this evening 17:02
17:19 MasterDuke left
nine jnthn: why is BB 0 not a successor of blocks that are covered by an exception handler? 17:20
jnthn If it were, we'd not have a unique dominator. iirc, BB0 is always empty and largely there for this purpose. 17:32
(To be clear, no block that dominates all other blocks of the graph) 17:34
17:41 domidumont left
nine What is the mechanism then that's supposed to tell the phi builder to consider writes that happened before a handler area? I.e. in `set r1(1), 1; [FH Start] die; set r1(2), 2; [FH End] takehandlerresult r2(1); phi r1(3) r1(1) r1(2) return_i r1(3);` we definitely need that phi, since it's not guaranteed that the set r1(2), 2 happens 17:42
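[Editor's note: for readers following along, a rough C analogue of nine's bytecode fragment, using setjmp/longjmp to stand in for the frame handler; the names are invented and this only illustrates why the phi is needed, it is not MoarVM code.]

```c
#include <setjmp.h>

static jmp_buf handler;               /* stands in for [FH Start]/[FH End] */

int example(int should_throw) {
    volatile int r1 = 1;              /* set r1(1), 1 */
    if (setjmp(handler) == 0) {       /* [FH Start] */
        if (should_throw)
            longjmp(handler, 1);      /* die: control may leave the region */
        r1 = 2;                       /* set r1(2), 2 -- not guaranteed to run */
    }                                 /* [FH End]: handler resumes here */
    return r1;                        /* in SSA this read needs
                                         phi r1(3) <- r1(1), r1(2) */
}
```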
17:43 patrickb joined
nine All other types of branching are handled via successor edges and, from those, the dominance frontier set 17:43
jnthn Successor edges 17:44
Maybe there's an edge missing? 17:45
nine Which one would that be? gist.github.com/niner/0fa1f36ea49a...8b9e5f722c
If it's successors, then I'd say that every BB that contains a throwish op such as invoke must have an edge either to BB 0 or at least to the BB containing the takehandlerresult op 17:47
(if covered by a FH)
jnthn Potentially to the handler part of all handlers that we are in 17:49
Which is what I've done in another project recently, anyway
That'd in principle mean the edges from BB 0 to the handlers could go away 17:50
Well, modulo inlining fun
Yeah, looks like that (extra successor edges) is exactly what's needed. I think they may well already be added for control exception handlers, though...hmm 17:52
See github.com/MoarVM/MoarVM/blob/mast...aph.c#L181 17:53
nine Ah, yes, now that I see that I did wonder why control exception handlers are handled specially there but not others 17:54
jnthn And then github.com/MoarVM/MoarVM/blob/mast...aph.c#L684
nine There is extra code for making that differentiation which I assumed was there for a very good reason
jnthn But what's a bit curious is that a return exception handler is a control exception handler 17:55
So it should already be getting these edges
nine The method in question is github.com/rakudo/rakudo/blob/mast....nqp#L4842 17:56
I.e. it's not about the return control exception but an adhoc exception thrown by find_symbol 17:57
jnthn Ohh...this is a catch handler
Hmmm
I bet Rakudo's code-gen always introduces sufficiently many blocks to never really trip this (and/or the presence of the CATCH succeed handlers makes up for it somehow) 17:58
Thus why we've got away with it for so long
But yeah, I'd solve this by introducing the edges for all kinds of exception 17:59
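[Editor's note: a hedged sketch of the rule being converged on here, with invented structures rather than MoarVM's real spesh-graph types (MVMSpeshBB, MVMFrameHandler): every block containing a throwish op gets a successor edge to the target block of each frame handler covering it, regardless of handler category.]

```c
#include <stdlib.h>

/* Illustrative types only -- not MoarVM's actual data structures. */
typedef struct BB {
    struct BB **succ;          /* successor edges in the CFG             */
    int         num_succ;
    int         first_offset;  /* bytecode range covered by this block   */
    int         last_offset;
    int         has_throwish_op;
} BB;

typedef struct {
    int start_offset;          /* protected region                       */
    int end_offset;
    BB *goto_bb;               /* block the handler transfers control to */
} Handler;

static void add_succ(BB *from, BB *to) {
    from->succ = realloc(from->succ, (from->num_succ + 1) * sizeof(BB *));
    from->succ[from->num_succ++] = to;
}

/* For every block containing a throwish instruction, add an edge to the
 * handler target of each handler whose region covers it -- for all handler
 * categories, not only control exceptions. */
void add_handler_edges(BB **bbs, int num_bbs, Handler *hs, int num_hs) {
    for (int i = 0; i < num_bbs; i++) {
        if (!bbs[i]->has_throwish_op)
            continue;
        for (int j = 0; j < num_hs; j++)
            if (hs[j].start_offset <= bbs[i]->first_offset &&
                bbs[i]->last_offset <= hs[j].end_offset)
                add_succ(bbs[i], hs[j].goto_bb);
    }
}
```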
nine I do have the suspicion that the flapper in t/spec/S17-supply/return-in-tap.t is caused by this, too, but have not yet been able to confirm it
Ok, I can work with this. Thanks a lot! :)
jnthn Welcome 18:00
nine++ # no fear of spesh :)
18:22 lizmat joined 18:44 vrurg left 19:02 vrurg joined
patrickb o/ 19:04
19:06 vrurg left
nwc10 \o 19:07
patrickb nwc10: I'd value your opinion on github.com/MoarVM/MoarVM/pull/1385 19:08
nwc10 right now is act.yapc.eu/gpw2021/event/2189 so I can't really look 19:10
19:19 zakharyas left
patrickb No hurry. 19:28
20:36 vrurg joined 21:06 vrurg left, vrurg joined 21:09 patrickb left 21:10 zakharyas joined 21:12 MasterDuke joined
MasterDuke whoops, removing them means nqp won't build. have to add time, rebootstrap, then remove time_(i|n) 21:16
nine Incredible! A week of hardcore debugging. A day to formulate the right questions to jnthn. And the fix turns out to be deleting just 4 lines of code. 21:27
MasterDuke did you beat my not-very-serious 15m estimate? 21:28
nine Looks like :D 21:29
japhb Solution: remove the wrong bits.
21:29 nebuchadnezzar left 21:30 nebuchadnezzar joined
nwc10 Technically this is off topic on every channel I might (or might not) be on, but lists.tetaneutral.net/pipermail/cf...00704.html -- the machine is quite fast… the compile time and run time is about 30% faster than a recent MacBook Pro 16 (2.4 GHz 8-Core Intel Core i9 with 32 GB 2667 MHz DDR4). 21:30
this is "recent MacBook Pro" vs M1 mac mini
sort-of-relevance is "likely x86_64 is no longer the only CPU in town" 21:31
and, as I understand it, the M1 is the "low spec" chip. I'm wondering what comes next. 21:32
nine At least I get through a full rakudo build with MasterDuke's patch for logging undefined results of decont. It still errors in a couple of tests. But these may be due to an unrelated issue, or the 4 lines simply aren't enough after all. Still, it's very encouraging progress :) 21:33
nwc10: well the ceiling for that chip will be how much memory they can stack on top of it. I'm pretty sure the amazing performance is mostly a result of blazingly fast (and low latency) access to the stacked memory. 21:35
MasterDuke nine++ 21:39
nwc10 not thought of that (and had assumed that there were some other reasons it was fast(er)) - we shall see. But do remember that the Raspberry Pi 8GB only came once a chip existed. That thing is 16GB
so is 16GB around the current limit of "chips you can stack"?
I think that the largest machine I currently have access to is 256GB. That's a bit different. 21:41
21:45 zakharyas left
MasterDuke i used to have access to machines with 768gb. i mostly stopped caring how much memory it took to solve a problem 21:46
21:46 vrurg left
nwc10 I think ex-work ex-main-DB server had 768gb. (But there was one of it). I don't know what the replacement had. I don't know if I ever logged into it 21:48
On the "old" architecture the 768gb machine was also the "god box"
the new architecture was more sane.
eg - you do not want your NFS server to also be the DNS server, because if it needs to fsck its disks it will be single user for a long time, and ... 21:49
but you can shorten that to
"you do not want NFS"
japhb At Google we found, interestingly enough, that the limiting factor was often not the actual memory of the machine, but how many containers you could pack into it while still monitoring and managing them all. People used to working on smaller servers would often try to be super memory-efficient with their containers, and you'd find like half the RAM of the physical machine was essentially idle because all the 21:52
containers used up all the CPU and there wasn't enough cacheable stuff to productively use the remaining RAM.
MasterDuke i've never been responsible for setting up an alternative. and i don't ever want to be. but i've never had a good experience with nfs 21:53
those machines were overkill for probably about 98% of their use. but i did push their limits a couple times 21:59
timotimo: is there any way for the appimages to run files? or are they completely limited to -e? 22:05
22:06 vrurg joined
MasterDuke oh, giving them as an absolute path works. but unfortunately not if they then try to open files... 22:07
22:11 vrurg left 22:30 dogbert11 joined 22:31 dogbert12 joined, dogbert12 left 22:32 dogbert12 joined
MasterDuke Stage start      : 68450.000 (i suspect i've forgotten to scale a value somewhere...) 22:32
22:34 dogbert17 left 22:35 dogbert11 left