Geth MoarVM: 19245a16c3 | (Samantha McVey)++ | 3 files
Resolve unassigned codepoints as the default property values

Fixes a bug where non-characters were not returning the "Cn" General Category. Likely fixes other issues as well.
00:25
samcv yay
lizmat :-) 00:29
01:14 colomon_ joined 02:57 ilbot3 joined 02:58 colomon joined 07:58 domidumont joined 08:04 domidumont joined 08:50 domidumont joined 09:18 robertle joined 09:48 nwc10 joined 09:49 cog_ joined 09:52 cog__ joined 10:04 coverable6 joined, unicodable6 joined, evalable6 joined 10:09 perlawhirl joined 11:22 zakharyas joined 11:42 cognominal joined 13:44 brrt joined
timotimo jnthn: you wanted me to remind you of the cstructarray pr :) 13:55
food time!
jnthn eats the PR 14:01
Oh, wait...
14:07 brrt joined
lizmat some kind of Czech dish ? 14:31
15:04 zakharyas joined 15:43 zakharyas joined
timotimo i have a benchmark that spends a lot of time in the spesh arg guard runner. turns out it's doing a whole lot of native_ref_fetch causing lots of repr_box_int; i bet we can change that so that it only checks "what's inside this ref?" rather than "gimme the contents. what is it?" 16:08
(a lot of time, of course, is quite relative; it's not much in total, but it's like 5% of total time?)
16:28 dalek joined, synopsebot joined 16:34 Geth joined
timotimo i'm not sure if i want to introduce a new guard op that directly checks for the resulting value's STable and that it's concrete 16:35
that'd directly target native refs
16:36 committable6 joined 17:20 Ven joined
jnthn timotimo: Hm, but the guard tree really shouldn't be being built in such a way that it tries to deref a native ref in the first place 17:37
timotimo all i know is that callgrind shows it calls native_ref_fetch a bunch of times 17:38
jnthn And derefs are always guarded by the check on the stable
So I think it's doing something it shouldn't be.
And could perhaps be fixed by making it not stick a deref in the guard tree
timotimo 879_416 calls are native_ref_fetch, 1_757_724 to rakudo_scalar_fetch 17:39
right, then it'd have to check against IntLexRef or IntPosRef rather than Int for the conc value check that's next? 17:40
jnthn But IntLexRef is a container type already
The guard tree looks at the type of the argument 17:42
And only then derefs and looks inside it
But for an IntLexRef it should just see it's that, and decide it's done or not
Not look inside at all
timotimo unfortunately i don't know which exact arg guard tree it's doing that for 17:44
i can gdb it
jnthn Guard trees are dumped into the spesh log also
timotimo yup 17:47
but not really easily identifiable
there's no simple way to "break in this function but only if that other function is on the call stack" in gdb? 17:48
jnthn Hmm...can't think of one
Though could you do a conditional breakpoint in the arg guard tree on the REPR ID of the thing being deref'd?
For if it's something that's not a P6opaque?
timotimo probably 17:53
here's a suggestion: have a breakpoint in the "outer" function that enables the breakpoint on the "inner" function when entered, and another that disables it when leaving the function 17:54
wow, it looks like the conditional breakpoint is incredibly slow 18:00
(conditional on test->st->container_spec being not rakudo_scalar)
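The two gdb techniques discussed above can be sketched as a command file. This is a hypothetical session: the symbol names (`native_ref_fetch` as the "inner" function, `MVM_spesh_arg_guard_run` as the "outer" arg guard runner, `rakudo_scalar` as the container spec) are taken from the conversation, not verified against the MoarVM source, and the condition expression assumes a struct layout matching what timotimo describes.

```gdb
# Breakpoint 1: jnthn's suggestion -- stop in the deref only when the
# container spec is something other than rakudo_scalar.
break native_ref_fetch if test->st->container_spec != &rakudo_scalar

# Breakpoints 2 and 3: timotimo's workaround -- keep the "inner"
# breakpoint disabled, and enable it from a breakpoint on the "outer"
# function, so it can only fire after the outer function is entered.
break native_ref_fetch
disable 2
break MVM_spesh_arg_guard_run
commands 3
  silent
  enable 2
  continue
end
# gdb has no direct "on function exit" breakpoint for the disable step;
# one manual approach is to stop in the outer frame, run `finish`,
# then `disable 2` by hand.
```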
it's not getting hit :\ 18:07
18:17 zakharyas joined
samcv good * 18:22
18:37 domidumont joined 19:25 Ven joined 19:59 lizmat joined
timotimo well, the breakpoint didn't get hit during all the time i was afk 20:24
but it did for some reason fill up my ram quite nicely :|
20:26 AlexDaniel joined
[Coke] 19:26 -!- Ven is now known as Guest53736 20:28
oops. :)
Guest53736 uhm?
[Coke] accidental cut and paste.
Ven`` [Coke]: much better :p
timotimo well, i can no longer reproduce the native ref fetch from the arg guards 20:38
i must have run some different workload, or some change i made to the code made it go away
i have no clue how i got this data 20:47
i don't *think* there was an outdated callgrind file in that folder from long ago?
21:12 SourceBaby joined 21:43 Ven joined
lizmat jnthn: I guess my mental image of how the ThreadPoolScheduler works is flawed 21:55
somehow I thought that with a given input, say my @primes = ^50000 .hyper.grep: *.is-prime
you would always have the same number of tasks completed 21:56
I mean, distributed over different numbers of threads, in different orders, and what not
I've seen the number of completed tasks for the above code vary between 1596 and 1956 21:58
which I think is a wide range
timotimo maybe that's because when one of the workers "awaits" new work, it either immediately continues or suspends itself into the job pool?
turns out if you compile moarvm with --optimize=3 instead of --optimize=0 ... 22:05
lizmat: -McSnapper 22:09
Geth MoarVM: samcv++ created pull request #743:
Speed up concat of longer strings by a significant amount
22:30
22:30 Ven joined
jnthn lizmat: How are you counting completed tasks? 22:41
lizmat I've added a $!total attribute to the Worker role which gets updated inside take-completed 22:42
that's what's being used for that tally
jnthn Ah, that should be relatively accurate
What timotimo said would apply though 22:43
lizmat yes, since the Workers get updated serially by the supervisor
that should be lossless
jnthn Right
Yeah, I guess it's exact to the last 10ms 22:44
lizmat yeah, but in our examples, all tasks are done by the end anyway, there are no tasks left queued
jnthn OK. Then my first guess would be that sometimes, we get awaits that are not yet completed or other non-blocking contention 22:45
Which result in continuations being scheduled 22:46
I agree it's a fairly wide range, though
lizmat but would those be registered as tasks though ?
as completed tasks even
jnthn Yes 22:47
lizmat ok
well, it's been a long day... and it was very late last night
jnthn See ThreadPoolAwaiter or whatever it's called :)
lizmat ok
jnthn Uh, but not today ;)
lizmat I'll look at it tomorrow :-)
jnthn The thing you're looking for is the push onto the queue though :) 22:48
lizmat ok
timotimo arrived at home safely 23:12