github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
01:18 Kaeipi joined 01:20 Kaeipi left 01:21 Kaeipi joined 01:22 Merfont joined, Kaeipi left, Kaiepi left, Merfont left 01:36 lucasb left 03:26 nativecallable6 left, sourceable6 left, bisectable6 left, greppable6 left, reportable6 left, benchable6 left, releasable6 left, statisfiable6 left, squashable6 left, notable6 left, tellable6 left, coverable6 left, quotable6 left, bloatable6 left, committable6 left, shareable6 left, unicodable6 left, evalable6 left, linkable6 left, reportable6 joined, evalable6 joined, linkable6 joined, coverable6 joined, bisectable6 joined 03:27 nativecallable6 joined, shareable6 joined 03:28 unicodable6 joined, greppable6 joined, squashable6 joined, nwc10 left, committable6 joined, quotable6 joined, benchable6 joined, tellable6 joined, sourceable6 joined, releasable6 joined, notable6 joined 03:29 statisfiable6 joined, bloatable6 joined, nwc10 joined 05:23 Kaiepi joined
nwc10 good *, #moarvm 07:13
08:05 domidumont joined 08:09 brrt joined
brrt \o 08:11
nwc10 o/ 08:12
08:16 MasterDuke joined
nine \o/ 08:21
08:43 Altai-man joined 08:51 leont joined
jnthn morning o/ 09:12
brrt \o
09:14 MasterDuke left
timotimo mohrning 09:15
nwc10 is still working through that long page about the webkit JIT. 09:22
timotimo it's really really long 09:23
nwc10 But so far the "money quote" was that they discovered that the cost of OSR from a failed specialisation was sufficiently high that they should actually only do them if the stats collector estimates that the probability of success is 1.0 09:24
timotimo i already found a few bits interesting: doing the diamond optimization when feedback suggests it's worth it, duplicating code paths between some branches if they are very small, ... and i forgot what else i was going to mention 09:25
nwc10 I don't know enough of the details of spesh, JITs etc to know that that is interesting.
the "p(1) or don't bother" was my level :-) 09:26
timotimo ah, when feedback suggests a loop hasn't finished running, that's pretty much the only situation in which something will be "optimized again" or something?
i stumbled upon that a few times in the past
having a frame with two long loops, so that OSR kicks in on the first loop, and the second loop has no information whatsoever, so doesn't get optimized much
there was also a point about not being able to know anything about any registers when entering the middle of some code 09:28
and yeah, i feel that 09:33
we currently give the 0th block a path to all the osr points or something like that?
so in many spots we have blocks where a PHI gets created with version 0 of some register, about which nothing is known 09:34
which is essentially how we express this problem in our SSA structure 09:35
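A minimal C sketch of the two-long-loops case timotimo describes above; the function and names are illustrative only, not MoarVM code. The point is that OSR enters mid-way through the first loop, so anything compiled from that entry onwards starts with registers about which nothing is known, which is what the version-0 PHIs express in spesh's SSA form.

    #include <stdio.h>

    static long two_long_loops(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++)    /* gets hot first; OSR enters here */
            sum += i;
        for (long i = 0; i < n; i++)    /* compiled with no profile data from before
                                           the OSR entry, so gains little */
            sum -= 2 * i;
        return sum;
    }

    int main(void) {
        printf("%ld\n", two_long_loops(1000000));
        return 0;
    }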
brrt nwc10: which page is that 09:42
I think I have seen a page like that ages ago but I'm not sure it's the same one
nwc10 somewhere in webkit.org/blog/10308/speculation-...criptcore/
brrt thank you
nwc10 brrt: MasterDuke told #moarvm about it yesterday 09:43
brrt well, I'm sure I haven't seen that one before
also, .... the cost of OSR depends a lot on the runtime architecture
nwc10 don't forget to eat lunch - it's quite long.
brrt heh
good point. I was about to go get lunch
09:55 brrt left 09:58 MasterDuke joined
MasterDuke yeah, it was a thought-provoking read. it (and coincidentally seeing all those deopts when investigating that zen vs whatever slice issue) is what prompted my question about whether we ever undo speshializations 10:00
jnthn Well, deopt kinda is the undoing; the thing we don't do is use the number of deopts to mark a specialization as "don't use this again" 10:03
At the moment we'd only really be able to mark it that way, too, since short of a GC-time pass over all callstacks, including those in continuations, there's no way of knowing if a specialization's generated code and metadata can be freed up. 10:04
Making a specialization a collectable object would be a way to tackle that
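A hedged sketch of the missing piece jnthn mentions: counting deopts per specialization and retiring it past a threshold. None of these names or fields exist in MoarVM; they are assumptions for illustration only.

    #define DEOPT_RETIRE_THRESHOLD 10   /* hypothetical cut-off */

    typedef struct {
        unsigned int deopt_count;       /* how often this specialization has deopted */
        unsigned int do_not_use;        /* once set, dispatch would skip this candidate */
    } CandidateStats;

    static void record_deopt(CandidateStats *stats) {
        if (++stats->deopt_count >= DEOPT_RETIRE_THRESHOLD)
            stats->do_not_use = 1;      /* mark as "don't use this again" */
    }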
MasterDuke how much work/how difficult would that be? 10:05
timotimo instrumentation runs into similar fun
jnthn MasterDuke: Which bit? :) 10:16
MasterDuke timotimo: any inspired thoughts about whatever slice? or even regular thoughts?
jnthn The refactor to hold specializations in a GC-able is...probably not too bad.
MasterDuke and that's a likely requirement to marking them as not to use? 10:17
timotimo i'm not entirely sure which part wants rewritten; probably the part that thinks it has to consecutively AT-POS the range in order to get consecutive elements
jnthn MasterDuke: Depends. If you want to re-specialize based on new stats, yes, because otherwise you end up with a memory leak of the old specializations. If you just want to mark them as "do not use" but not produce replacements, it's OK 10:18
MasterDuke well, re-specialize seems like a good end goal 10:19
so i'm not sure how much gain there would be in doing the work of just marking them not to use 10:20
before making them collectable
timotimo: hmm, i hadn't really looked that far back in the callstack 10:21
jnthn MasterDuke: Also, from a "need to learn the data structures involved" point of view, doing the refactor to collectable first is probably better 10:23
It's kinda being thrown in the middle end rather than the deep end :)
MasterDuke well, if nobody beats me to it (and please please don't hold back if anybody else feels inspired) i may give that a try 10:25
timotimo this will want to be similar to a weakhash, right? 10:26
jnthn I don't think it needs to be weak at all
MasterDuke but i suspect i'll need a bit of a pointer as to where/how to start (...and jnthn et al. all of a sudden stop logging into irc)
timotimo something will have to hold on to all specializations so we can dispatch to them, right? 10:27
so if a whole code object dies, the specializations will die with it
jnthn Yes, that's already the case today
The thing we want to achieve here is making individual specializations deathable while the code object remains alive.
timotimo but i think what we were looking for is tossing out a specialization that only ever results in deopt immediately, or something?
jnthn At the moment there's an MVMSpeshCandidate struct 10:28
We'd need to make that an MVMCollectable, probably via adding a REPR
And then going through every place that holds one (e.g. the spesh_cand pointer on an MVMFrame) and marking it
And then moving the freeing logic into that REPR's gc_free
The idea here being that we can remove a spesh candidate from the list of those on a code object 10:29
But have it not actually die until every piece of code that is running that specialization - even if suspended in a continuation - has gone away
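A rough sketch of the shape jnthn outlines, assuming only the names he mentions (MVMSpeshCandidate, MVMCollectable, the spesh_cand pointer on MVMFrame, a REPR gc_free); everything else here is an assumption, not the actual implementation.

    #include "moar.h"

    /* The candidate gains a collectable header, so the GC owns its lifetime. */
    struct HypotheticalSpeshCandidate {
        MVMCollectable common;
        /* ...the existing MVMSpeshCandidate body: bytecode, deopt tables, ... */
    };

    /* Every holder of a candidate marks it, e.g. a frame's spesh_cand pointer. */
    static void mark_frame_candidate(MVMThreadContext *tc, MVMFrame *frame,
                                     MVMGCWorklist *worklist) {
        MVM_gc_worklist_add(tc, worklist, &frame->spesh_cand);
    }

    /* The freeing logic moves into the REPR's gc_free, so a candidate removed
       from its code object's list only dies once no running (or suspended)
       code references it any more. */
    static void candidate_gc_free(MVMThreadContext *tc, MVMObject *obj) {
        /* free JITted code, deopt/inline tables, etc. */
    }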
timotimo ah
that's so obvious, that i should have thought of it
jnthn Anyway, +1 to doing this as a pre-req for doing the trickier bit of throwing them out if they deopt too much 10:30
I suspect the tricky part there is partly the threading/communication
(We need the decision making to be done by the spesh thread)
But maybe the way to deal with that is just to write spesh log entries about deopts
timotimo right, also tossing out the candidates from the spesh guard tree? 10:31
MasterDuke sounds like an even better project for any wild timotimos lurking around than for /me
jnthn timotimo: Well, you'd probably toss them from the candidate list and then rebuild the tree
We never edit the tree any more, always rebuild it
timotimo true 10:32
jnthn Once I added derived specializations, the code for editing it became so complex I couldn't reason about it any more.
timotimo that turned out messy :)
yeah
jnthn I didn't ever get to a working version; I reached the point where I realized that even if I got it to work then a) it probably would be somehow wrong, and b) I'd probably not be smart enough to debug it. 10:33
MasterDuke: going on stuff you've done before, I think the refactor to collectable bit would be within reach (and a useful learning experience, if you want to get more into working on spesh stuff) 10:35
MasterDuke yeah, i'll give it a try if nobody ends up beating me to it. i just don't have quite as much free time recently (and what i do have is more broken up), so seriously, don't wait on me 10:38
jnthn *nod* 10:41
I think my hands are full for a while yet with dispatcher and rakuast :)
10:43 brrt joined
brrt ok lunch was had 10:43
11:46 sena_kun joined 11:48 Altai-man left 12:21 AlexDaniel` left 12:29 AlexDaniel` joined 13:18 zakharyas joined 14:05 vrurg_ is now known as vrurg 14:14 zakharyas left 15:06 camelia left 15:07 camelia joined
nine m: 'say "back on openSUSE 15.2"' 15:09
camelia WARNINGS for <tmp>:
Useless use of constant string "say \"back on openSUSE 15.2\"" in sink context (line 1)
nine m: say "back on openSUSE 15.2"
camelia back on openSUSE 15.2
brrt huh 15:32
nine MasterDuke: just go for it! It's an area you're obviously interested in so you will have fun and learn a ton. In my book there shouldn't be any other factors to consider 15:36
15:42 MasterDuke left 15:45 Altai-man joined 15:48 sena_kun left 16:08 MasterDuke joined
[Coke] thought brrt said lunch was *bad* and was sad for a moment 16:52
brrt haha 16:53
no
lunch was not bad
[Coke] one big pandemic change for me: I eat at home a lot more these days. Not sure if that's bad or good. 17:01
17:09 brrt left
nine [Coke]: depends on how well you cook I guess ;) 17:18
17:54 Kaiepi left 17:55 Kaiepi joined 18:34 domidumont left
nwc10 .seen froggs 19:01
tellable6 nwc10, I saw froggs 2019-09-15T13:29:14Z in #perl6: <FROGGS> uhhh, that sounds awesome
nwc10 is curious about commit 0ae47f3fbc55a307c3ebf323d4252399744ee745
relates to s390 19:02
So, if you apply the same hack as s390 has, you can build on sparc32 and sparc64 19:12
oh, and you also need to hack libtommath for sparc32, because you need to end up in the `MP_32BIT` define clause 19:13
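A hedged sketch of the sparc32 hack nwc10 describes: forcing libtommath into its MP_32BIT configuration before its header is included; exactly where this define has to land in a MoarVM build is an assumption here.

    /* ensure tommath.h's digit-size selection takes the MP_32BIT branch */
    #ifndef MP_32BIT
    #define MP_32BIT
    #endif
    #include "tommath.h"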
19:46 sena_kun joined 19:48 Altai-man left 23:31 sena_kun left 23:51 leont left