github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
10:02 MasterDuke joined
MasterDuke this is interesting, it's non-deterministic. sometimes the example (when on my branch) has 43k deopts, sometimes 688k. however, with MVM_SPESH_BLOCKING=1 it's always 688k 10:05
nwc10 hash randomisation can affect spesh 10:06
I didn't really figure this out myself; jnthn said it directly
MasterDuke well, not just hash randomization, right? aiui, MVM_SPESH_BLOCKING=1 does more than just disable hash randomization 10:08
nwc10 yes. Not sure why it makes your numbers consistent 10:09
and it doesn't disable hash randomisation
you have to compile the C code conditionally to do that
but hash randomisation is the big reason why the spesh behaviour can vary with a default build and no environment variables 10:10
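To make that concrete, here is a minimal sketch of a compile-time-only switch for hash seed randomisation. The macro name MVM_DISABLE_HASH_RANDOMIZATION and the seeding scheme are illustrative assumptions, not MoarVM's actual code; the point is only that the default build randomises per process, so hash iteration order (and hence what spesh observes) can differ from run to run, and no environment variable can turn that off.

```c
/* Minimal sketch of a compile-time-only switch for hash seed
 * randomisation. The macro name and the seeding are illustrative
 * assumptions, not MoarVM's actual code. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t hash_seed;

static void init_hash_seed(void) {
#ifdef MVM_DISABLE_HASH_RANDOMIZATION
    /* Recompiled with the (hypothetical) flag: a fixed seed gives the
     * same hash iteration order, and thus the same order of events for
     * spesh to observe, on every run. */
    hash_seed = 0;
#else
    /* Default build: a per-process seed, so bucket order differs
     * between runs and spesh may log and specialize differently. */
    hash_seed = (uint64_t)time(NULL); /* real code would use a proper RNG */
#endif
}

int main(void) {
    init_hash_seed();
    printf("hash seed: %llu\n", (unsigned long long)hash_seed);
    return 0;
}
```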
MasterDuke hm, on master the number varies by a tiny amount, but it's always right around 688k (with or without MVM_SPESH_BLOCKING=1) 10:11
i wasn't even sure at first how there could be so many deopts on my branch, but i guess they're happening so fast that the spesh plan to remove the optimization doesn't have time to run. it would be nice if MVM_SPESH_BLOCKING=1 triggered the remove case, so i could easily get some timing numbers 10:16
how does a plan get triggered? 10:17
nine MasterDuke: MVM_SPESH_BLOCKING=1 means that executing threads are stopped while spesh creates a candidate. Without it the main program continues running while spesh is busy. I.e. with blocking enabled, speshed code will get more exercise and you will see more consistent results 10:19
MasterDuke but then i would expect to always see fewer deopts, because my removal code got hit? 10:21
or is creating a candidate the only thing that stops executing threads when MVM_SPESH_BLOCKING=1? and i need to add some `if (MVM_SPESH_BLOCKING == 1) { stop_executing_thread() }` before/in/around my removal code? 10:24
nine don't know 10:25
MVM_SPESH_BLOCKING is implemented via a mutex on the spesh log 10:26
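A toy model of what nine describes: the executing thread appends to the spesh log and, with blocking enabled, waits until the spesh worker has drained the log and produced a candidate. The mutex/condvar structure and all names below are assumptions for illustration, not MoarVM's real internals.

```c
/* Toy model of "MVM_SPESH_BLOCKING is implemented via a mutex on the
 * spesh log": with blocking on, the running thread waits until the
 * spesh worker has processed what it logged. All names here are
 * illustrative, not MoarVM's real internals. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t log_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  log_ready = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  log_done  = PTHREAD_COND_INITIALIZER;
static int entries_pending = 0;
static int spesh_blocking  = 1;   /* stands in for MVM_SPESH_BLOCKING=1 */

/* Mutator thread: append an entry to the spesh log; with blocking on,
 * wait until the spesh worker has drained the log before continuing. */
static void log_and_maybe_wait(void) {
    pthread_mutex_lock(&log_mutex);
    entries_pending++;
    pthread_cond_signal(&log_ready);
    if (spesh_blocking)
        while (entries_pending > 0)
            pthread_cond_wait(&log_done, &log_mutex);
    pthread_mutex_unlock(&log_mutex);
}

/* Spesh worker: wait for log entries, "produce a candidate", then wake
 * any thread blocked on the log. */
static void *spesh_worker(void *arg) {
    pthread_mutex_lock(&log_mutex);
    while (entries_pending == 0)
        pthread_cond_wait(&log_ready, &log_mutex);
    entries_pending = 0;              /* candidate produced */
    pthread_cond_broadcast(&log_done);
    pthread_mutex_unlock(&log_mutex);
    return arg;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, spesh_worker, NULL);
    log_and_maybe_wait();   /* returns once spesh has caught up */
    pthread_join(worker, NULL);
    puts("speshed code installed before execution continued");
    return 0;
}
```

Without blocking, the mutator would skip the wait and keep running unspeshed (or stale) code while the worker catches up, which is one way run-to-run variation creeps in.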
10:36 camelia joined
nine m: say "back from my new home!" 10:36
camelia back from my new home! 10:37
10:56 camelia left 11:02 nine_ joined, nine_ left
nine Hm...getting camelia to connect via IPv6 is a lot harder than expected though 11:49
12:18 camelia joined
MasterDuke heh. so i see a `Observed type specialization of 'reify-at-least' <...> It was planned for the type tuple: Type 0: List::Reifier (Conc)`. then `Deopt specialization of 'reify-at-least' <...> Removing a spesh candidate because it was deopted too many times (365)`. and then even later... `Observed type specialization of 'reify-at-least' <...> It was planned for the type tuple: Type 0: List::Reifier (Conc)` 17:03
but i only ever see one type tuple. maybe it's oscillating between two type tuples, so initial speshialization, then a new tuple and removal, then back to the original? but all the type tuples seen in the log are the same? 17:06
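The observe/deopt/remove/observe cycle quoted above can be sketched as a per-candidate deopt counter with a removal threshold. The threshold of 365 matches the count in the log line, but the struct and function names are hypothetical, not MoarVM's; the sketch only shows why nothing prevents the same type tuple from being specialized again after removal.

```c
/* Sketch of the observe -> deopt -> remove -> observe cycle: each
 * candidate carries a deopt counter, crossing a threshold removes the
 * candidate, and the same type tuple may be observed and specialized
 * again later. Names and structure are illustrative only. */
#include <stdio.h>

#define DEOPT_REMOVAL_THRESHOLD 365   /* matches the count in the log */

typedef struct {
    const char *type_tuple;  /* e.g. "Type 0: List::Reifier (Conc)" */
    int         deopt_count;
    int         active;
} SpeshCandidate;

static void observe(SpeshCandidate *c, const char *tuple) {
    c->type_tuple  = tuple;
    c->deopt_count = 0;
    c->active      = 1;
    printf("Observed type specialization for tuple: %s\n", tuple);
}

static void deopt(SpeshCandidate *c) {
    if (c->active && ++c->deopt_count >= DEOPT_REMOVAL_THRESHOLD) {
        c->active = 0;
        printf("Removing candidate: deopted too many times (%d)\n",
               c->deopt_count);
    }
}

int main(void) {
    SpeshCandidate c = { 0 };
    observe(&c, "Type 0: List::Reifier (Conc)");
    for (int i = 0; i < DEOPT_REMOVAL_THRESHOLD; i++)
        deopt(&c);
    /* Nothing remembers the earlier removal, so the identical tuple
     * can be speshed again and the cycle can repeat. */
    observe(&c, "Type 0: List::Reifier (Conc)");
    return 0;
}
```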
17:06 brrt joined
brrt \o 17:07
nwc10 o/