github.com/moarvm/moarvm | IRC logs at colabti.org/irclogger/irclogger_logs/moarvm
Set by AlexDaniel on 12 June 2018.
00:13 sxmx joined 01:39 Altai-man_ left 01:43 sena_kun joined 03:31 harrow left 03:43 harrow joined 04:54 vrurg left 04:55 vrurg joined 06:35 frost-lab joined
nwc10 good *, #moarvm 06:46
07:30 ggoebel left 07:31 ggoebel joined 07:48 domidumont joined 08:11 domidumont left 08:59 patrickb joined
patrickb o/ 08:59
MasterDuke m: my $a; my $s = now; $a = do given "hi" { when 3 { "three" }; when 5 { "five" }; when "hi" { "hello" }; when * > 40 { "> 40" }; default { "default" } } for ^40_000; say now - $s; say $a   # nine this was the code 09:01
camelia 3.7216878
hello
MasterDuke wow. on 2020-11-12 that took 7.7s for camelia 09:02
but a profile shows it's still dominated by github.com/rakudo/rakudo/blob/mast...ce.pm6#L98 09:04
42% of the time in GC (369 GC runs)
nine MasterDuke: that camelia speedup is due to it running on an AMD Ryzen 7 3700X now 09:13
MasterDuke oh, ha! no wonder it matched my local time much more closely than i remember it doing before 09:14
nine MasterDuke: the good news is: my latest version for correct backtraces is about 25 % faster in that benchmark :) 09:15
09:16 zakharyas joined
MasterDuke nice! 09:19
nine I wonder if it'd be possible to create the backtrace on-demand 09:22
MasterDuke that sounds like it could make Failures even cheaper 09:27
09:29 frost-lab left, frost-lab joined 10:04 sena_kun left 10:06 frost-lab left 10:08 sena_kun joined
jnthn nine: I think so long as we keep hold of the nqp::ctx result, the Backtrace frames themselves can be lazily built out of it 10:23
But maybe we already are and the cost is in the other bit
Maybe also Failure can just stick the nqp::ctx into its $!backtrace and expand it on first request into a full-on Backtrace object 10:24
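(A minimal Raku sketch of the idea above, not actual Rakudo code: stash the low-level context and only build the Backtrace on first request. build-backtrace-from-ctx is a hypothetical helper, stubbed here so the sketch compiles; in the real design it would walk the saved context's caller chain.)
    use nqp;

    # Hypothetical helper: would walk the saved context's caller chain;
    # stubbed so the sketch compiles.
    sub build-backtrace-from-ctx(Mu $ctx) { Backtrace.new }

    class CheapFailure {
        has $.exception;
        has Mu $.ctx;        # low-level context captured at fail time
        has $!backtrace;

        # Only pay for a full Backtrace if someone actually asks for one.
        method backtrace() {
            $!backtrace //= build-backtrace-from-ctx($!ctx);
        }
    }

    my $f = CheapFailure.new(
        exception => X::AdHoc.new(payload => 'oops'),
        ctx       => nqp::ctx(),
    );
    say $f.backtrace ~~ Backtrace;   # True, built on first request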
nine jnthn: I'm pretty sure we can have it, but there are a few details to take care of. MVM_exception_backtrace doesn't work on an MVMContext; it needs an MVMException. One reason is that an MVMContext does not contain information about the position within that frame. 10:30
Throwing an exception, on the other hand, does not preserve the callers' current positions. So the longer you wait to generate the backtrace, the higher the chance of getting a wrong result. 10:31
10:32 domidumont joined
nine But that's kinda true for MVMException also, since it stores the information in the frame extra. But that works only once: it refuses to set the same information again. So it preserves the original backtrace, but prevents future backtraces containing the same frames from being correct. 10:33
10:33 zakharyas left
nine I wonder if this is a problem with MVMContext in other....contexts as well. 10:34
If you've got a call chain like A -> B -> C -> D -> nqp::ctx and later on A -> B -> E -> F -> nqp::ctx, the latter context will still think B is at the point where C was called. 10:35
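(To make that scenario concrete, a small self-contained illustration, my own rather than anything from the channel; the code only captures the two contexts, and the comments mark where a lazily rendered backtrace could report B's position wrongly.)
    use nqp;

    sub A(Str $which) { B($which) }
    sub B(Str $which) { $which eq 'first' ?? C() !! E() }
    sub C() { D() }
    sub D() { nqp::ctx() }    # context captured via A -> B -> C -> D
    sub E() { F() }
    sub F() { nqp::ctx() }    # context captured via A -> B -> E -> F

    my Mu $ctx-first  := A('first');
    my Mu $ctx-second := A('second');

    # A backtrace rendered lazily from $ctx-second could still report B
    # at the position where it called C rather than where it called E.
    say nqp::isconcrete($ctx-second);   # 1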
jnthn Ah, yes, there's some assumption about how long you'll hang on to and use the ctx, I guess 10:38
The whole thing gets really icky too because you can have deopts and uninlining after the point of capturing the ctx 10:39
And "just snapshot the lot" doesn't work either because it's also vulnerable to that happening. 10:40
(The lexicals can move, for example.)
10:41 zakharyas joined
nine Funny how we're so eager to create backtraces for Failures (which we have to do to get them right) but actually create them lazily for Exceptions. 10:43
jnthn Yeah... That's largely thanks to exception handlers running on the stack top :) 10:59
The best solution here would be to be able to eliminate the taking of the backtrace at all in the case the failure is getting disarmed immediately in the caller 11:01
Either by hoping it will fall out of generic optimizations, such as a combination of inlining and EA (escape analysis), or maybe something more targeted (dunno how that'd look)
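(A small Raku illustration of the "disarmed immediately" case, my own example rather than anything from the thread: the caller checks the Failure's definedness right away, so its backtrace is never inspected and is exactly the cost one would like to skip.)
    # The Failure returned by first-line is checked, and thereby disarmed,
    # immediately in the caller, so its backtrace is never looked at.
    sub first-line(Str $path) {
        $path.IO.e ?? $path.IO.lines.head !! fail "no such file: $path"
    }

    with first-line('/no/such/file') -> $line {
        say "got: $line";
    }
    else {
        say "fell back to a default";   # Failure handled here, backtrace unused
    }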
12:11 zakharyas left 15:29 rba left 15:30 rba joined 15:31 camelia left 15:50 camelia joined 16:01 zakharyas joined 16:50 ggoebel left 17:52 domidumont left
nwc10 So, yesterday my GPW2021 t-shirt arrived in the post, so I put it on and uploaded a picture to the conference chat channel. 19:21
In the evening at the social (distancing) event the question came up of what t-shirt to wear the next day (today)
various other German Perl Workshop t-shirts were suggested
corion said "no, obviously a Frankfurt YAPC::Europe t-shirt"
and I said "which one?"
so I'm wearing the alternative "Perl 6" version t-shirt that Pm produced 19:22
YAPC::Europe 2012 is (to the best of my knowledge) the only Perl conference whose t-shirt was cool enough to get a mock version 19:23
MasterDuke m: my $a; my $s := now; $a := now for ^100_000; say now - $s; 20:10
camelia 0.94673692
MasterDuke well, i haven't run any tests yet, but ^^^ now takes 0.03s on my machine 20:11
lizmat that will make a lot of people happy :-)
MasterDuke bunch of tests fail, but at least there's something here to work with 20:12
heh, make that lots of tests fail so far...
lizmat well, I think there's some nqp::timex in test ? 20:13
*Test
MasterDuke i pretty much know where the problems are, it's just a matter of getting all the cases right (e.g., Instant + Instant, Instant + Duration) 20:16
it's probably going to make sense to change Duration's $!tai to Int also, but i haven't done that yet 20:17
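(For reference, my own summary of the arithmetic that has to keep working whichever internal representation Instant and Duration end up with; Instant + Instant, also mentioned above, is deliberately an error in Raku.)
    my $start    = now;                 # Instant
    my $elapsed  = now - $start;        # Instant - Instant  => Duration
    my $later    = $start + $elapsed;   # Instant + Duration => Instant
    my $deadline = $start + 5;          # Instant + Real     => Instant

    say $elapsed  ~~ Duration;          # True
    say $later    ~~ Instant;           # True
    say $deadline ~~ Instant;           # True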
lizmat babysteps :-) 20:19
MasterDuke maybe will make some more progress later tonight or tomorrow, afk for now 20:20
20:52 vrurg left 20:54 sena_kun left 21:02 vrurg joined 21:19 zakharyas left 21:58 patrickb left
japhb Yeah, I'm definitely one of those for whom MasterDuke++'s improvements will be happy-making 22:15
I have un-PR-ed changes to various modules to include high-resolution timing information, which I haven't pushed upstream because I needed to use nqp::time_n to avoid the timing itself having a very noticeable effect on code performance 22:16
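(A minimal sketch of that low-overhead timing pattern, not japhb's actual module code: nqp::time_n returns floating-point seconds as a plain num, so there's no Instant allocation on the hot path. The workload sub is a placeholder.)
    use nqp;

    sub work() { (^10_000).sum }          # placeholder workload

    my num $t0 = nqp::time_n();           # plain num, no Instant created
    work();
    my num $t1 = nqp::time_n();
    say "work() took { $t1 - $t0 } seconds";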
MasterDuke cool. hopefully it won't take too long to get it both faster and passing tests... 22:20
jnthn I wrote Log::Timeline to use Instant on the assumption that at some point it'd be good enough :)
I'm happy that will happen
MasterDuke++